14 Replies Latest reply on Apr 17, 2009 3:00 PM by PierreJasmin

    Get Input Image Size?

    JoshSommers
      Is there any way to get the input image pixel dimensions?
        • 1. Re: Get Input Image Size?
          Kevin Goldsmith Level 3
          No, and there is an explanation of why in the documentation. If you need the image dimensions, you can pass them in as a parameter.
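          [A minimal sketch of that workaround. Names like SizeAware and inputSize are illustrative, not from any shipping kernel; the host must supply the dimensions because the kernel cannot ask for them.]

          ```
          <languageVersion : 1.0;>
          kernel SizeAware
          <   namespace : "example";
              vendor : "example";
              version : 1;
          >
          {
              input image4 src;
              output pixel4 dst;

              // Host-supplied dimensions; the kernel itself has no way to ask.
              parameter float2 inputSize
              <
                  minValue:     float2(1.0, 1.0);
                  maxValue:     float2(4096.0, 4096.0);
                  defaultValue: float2(512.0, 512.0);
              >;

              void evaluatePixel()
              {
                  // With the size known, normalized (0..1) coordinates are available:
                  float2 uv = outCoord() / inputSize;
                  dst = sampleNearest(src, outCoord());
              }
          }
          ```

          From ActionScript 3 the host side is along the lines of shader.data.inputSize.value = [bmp.width, bmp.height];.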
          • 2. Re: Get Input Image Size?
            nojyfied
            which is quite hacky! I'm doing the genie effect, and passing in the width and the height creates all sorts of complications.

            doing someDispObject.filters = [new GenieDistort(0, 100, 0, 10)]; would be much easier to use than watching for changes in dimensions and passing them in every time they change. It creates a lot of hacky code.
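            [For context, the host-side glue being complained about looks something like this in AS3. The names applyGenie and inputSize are hypothetical; Shader and ShaderFilter are the real Flash Player classes.]

            ```
            // Hypothetical AS3 glue: re-send the display object's size to the
            // shader every time its dimensions change, since the kernel can't ask.
            function applyGenie(target:DisplayObject, shader:Shader):void {
                shader.data.inputSize.value = [target.width, target.height];
                target.filters = [new ShaderFilter(shader)];
            }
            // ...and this must be re-run from a resize handler (e.g. on
            // Event.RESIZE) whenever the target changes size.
            ```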

            What are the technical challenges in making this work?
            • 3. Re: Get Input Image Size?
              AIF Bob Level 3
              > What are the technical challenges in making this work?

              This is one of those things where it's easy to get the simple cases working, but there are a host of more difficult issues, particularly when we're dealing with graphs, that make the complex cases much more confusing.

              The concept of the "size" of an image is not a trivial one – it’s easy and obvious when you read an image in from a PNG file, however it’s less obvious once you’ve performed a Gaussian blur on that image. Where are the edges? Do the edges lie on pixel boundaries? Does the edge of the image match the edge of the buffer we have allocated for storing that image? We don’t have good, general purpose answers to those questions.

              We want to fix this, however we want to fix it correctly in a manner that works for all uses of Pixel Bender.
              • 4. Re: Get Input Image Size?
                JoshSommers Level 1
                Understood, but why not at least just make the dimensions of the actual starting input image available as global variables?
                • 5. Re: Get Input Image Size?
                  AIF Bob Level 3
                  quote:

                  Originally posted by: JoshSommers
                  Understood, but why not at least just make the dimensions of the actual starting input image available as global variables?


                  Because this is a great example of an easy case not generalizing well. If a Flash developer is working with a single kernel, the dimensions of the starting input image are easy to obtain, and there is no ambiguity about their meaning. For more general graph processing, where an individual filter isn't even aware that it is part of a graph, it isn't always clear what the starting input image is, let alone what its dimensions are.
                  • 6. Re: Get Input Image Size?
                    nojyfied Level 1
                    But when the filter is running, it's going to take an input image, right? (a computed image from the other filters). Why not give the filter the dimensions of that computed image?

                    When the CPU/GPU runs the filters, how big is the range of the for loops? Maybe that range could be exposed as some sort of global variable.

                    Just an idea. I know you guys have probably thought of this. I still don't get why it should be an issue in graphs. Could you demonstrate a sample use case?
                    • 7. Re: Get Input Image Size?
                      AIF Bob Level 3
                      quote:

                      Originally posted by: nojyfied
                      but when the filter is running, it's going to take an input image, right? (a computed image from the other filters).
                      why not give the filter the dimensions of the computed image.


                      Because in the general case this is not the information that you want. The dimensions of the computed image are calculated from the region reasoning functions, whose job is to ensure that the buffer is big enough. They're also useful for efficient processing, helping the runtime avoid calculating pixels that haven't changed or aren't displayed.

                      In the general case the size of the image is not the same as the size of the computed buffer.
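                      [To illustrate the distinction, here is a hedged sketch of the region reasoning functions for a blur with a fixed support radius; the 10-pixel radius is illustrative. These functions size the buffer the runtime allocates; they say nothing about where the image's "edge" is. Note that the Flash runtime does not call region functions at all.]

                      ```
                      // For a blur with a 10-pixel support radius, the runtime
                      // needs source pixels 10 px beyond the requested output
                      // region, so the buffer grows accordingly.
                      region needed(region outputRegion, imageRef inputRef)
                      {
                          return outset(outputRegion, float2(10.0, 10.0));
                      }

                      region changed(region inputRegion, imageRef inputRef)
                      {
                          // A changed input pixel affects output up to 10 px away.
                          return outset(inputRegion, float2(10.0, 10.0));
                      }
                      ```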

                      quote:

                      Could you demonstrate a sample use case?


                      I'm going to use the same example as I did before: a Gaussian blur filter. A "pure" Gaussian blur gives an infinite output, because the Gaussian function (which supplies the weights for the blur) extends to infinity. In practice, dealing with an infinite image is not practical, so one of the decisions that any implementor of a Gaussian blur makes is where to cut off the Gaussian function. There are several different ways of doing this, and different applications choose differently. One of the important points is that the Gaussian function isn't necessarily cut off on a pixel boundary; it may be cut off at a sub-pixel level. We might start with an image that is 640 x 480 pixels, then after a horizontal 1D Gaussian blur end up with an image that is 660.7 x 480 pixels (a cutoff radius of 10.35 pixels grows the image by that amount on each side: 640 + 2 x 10.35 = 660.7).

                      We have to round the 660.7-pixel width up to get an integral number of pixels to store the result in; however, the "edge" of the image (and, if I haven't made it clear enough already, the concept of the edge of the image is not always obvious) isn't on a pixel boundary.

                      I apologise for not being able to explain this better. We are looking at the problem, and I believe that there is a good solution out there; we just haven't tracked it down yet.
                      • 8. Re: Get Input Image Size?
                        maltaannon Level 1
                        Can't you use any of the region functions: needed(), generated() and so on? It seems like dod() holds the input dimensions, but I just can't get to it. Any ideas?
                        • 9. Re: Get Input Image Size?
                          maltaannon Level 1
                          Yup. I was right. How about this for a solution? http://maltaannon.com/blog/pixel-bender-input-dimensions/
                          • 10. Re: Get Input Image Size?
                            JoshSommers Level 1
                            Good find!

                            Unfortunately, I tried it and it doesn't seem to work for me.

                            :(
                            • 11. Re: Get Input Image Size?
                              Kevin Goldsmith Level 3
                              this was my response to the blog post:
                              Flash doesn’t support the region functions, so they aren’t called. In PS and AE, the region doesn’t always correspond to the entire input image either. Sorry…
                              • 12. Re: Get Input Image Size?
                                maltaannon Level 1
                                Is there any way to force it to do that?

                                Also, I had another idea. I'm creating a filter that uses AE's Point Control with default values of 0.5, 0.5, i.e. the center of the image.

                                I thought that if I create a dependent variable called "center" and assign its value in the evaluateDependents() function, I could use it later on as a reference to the center of my image (if needed), but first and foremost I could simply multiply it by 2.0 and have the full image size.

                                This seemed like a good idea. Unfortunately, whenever I change the Point Control in AE, the "center" variable changes as well; it seems like its value is assigned "by reference" instead of "by value".

                                Is there any way to avoid that?
                                (I'm going to post this question as a new thread as well to keep things in order)
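                                [A hedged sketch of the trick described above, with illustrative names (centerPoint, imageSize). It also suggests why the trick fails: dependents are recomputed whenever a parameter changes, so the dependent tracks the point rather than freezing its default value.]

                                ```
                                parameter float2 centerPoint
                                <
                                    defaultValue: float2(0.5, 0.5);  // intended: image center
                                >;

                                dependent float2 imageSize;

                                void evaluateDependents()
                                {
                                    // Intended to capture the default (0.5, 0.5) once and derive
                                    // the full image size from it. But evaluateDependents() reruns
                                    // every time a parameter changes, so imageSize follows the
                                    // point control instead of staying fixed.
                                    imageSize = centerPoint * 2.0;
                                }
                                ```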
                                • 13. Re: Get Input Image Size?
                                  I'm not sure I understand why it would be difficult to pass the source image size. When dealing with the blur problem, where you've sampled or manipulated pixels beyond the image bounds, why not have them clamped or wrapped? This is what happens in HLSL and would seem to be pretty acceptable. I ran into the problem tonight as well: not having the image size, which stemmed from the coordinates being in world space instead of a 0-1 space (which is what I'm used to with HLSL).
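                                  [For what it's worth, the clamp behaviour described (HLSL's CLAMP address mode, which replicates edge pixels) can be emulated by hand once the size is passed in; a hedged sketch, assuming an inputSize parameter like the one earlier in the thread:]

                                  ```
                                  void evaluatePixel()
                                  {
                                      // Emulate HLSL's CLAMP address mode: replicate edge pixels
                                      // instead of sampling transparent black outside the
                                      // (parameter-supplied) bounds.
                                      float2 coord = outCoord();
                                      float2 clamped = clamp(coord,
                                                             float2(0.5, 0.5),
                                                             inputSize - 0.5);
                                      dst = sampleNearest(src, clamped);
                                  }
                                  ```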
                                  • 14. Re: Get Input Image Size?


                                    The graph thing looks cool...
                                    I don't have time to play, but since you ask:

                                     

                                    Thought of this simplification --

                                    I call it "Abyss" (a single value) and "abyss" (a larger set than the image). (Imagine a better word than lowercase "abyss".)

                                    --

                                    The word Abyss connotes, to me, a void outside. In math terms, I guess one would say an image is a subset of the domain of a function (e.g. the domain: the set of all students in the school; the image: the set of students in Molly's class). The complement of an image is either an Abyss (a singleton, e.g. 0,0,0,0 for a 4-channel image) or another domain defined by "domain" && "not-Image". However, I am not using the words image and domain of definition here, as they might have other meanings in the doc.
                                    --

                                     

                                    An "abyss" here is a set of 2 or 3 things tied to the inputs

                                     

                                    a. Is the abyss defined?

                                    defined: e.g. a noise generator

                                    can be defined by an Abyss: e.g. adding 0,0,0,0 around a bounding box around a geometric shape
                                    not-defined: e.g. rendered elsewhere / a captured image input loaded (a layer is also not defined in AE if it is a comp). The complement of that image cannot be defined by an Abyss.

                                     

                                    b. Abyss Color:
                                    I use the term "abyss" not literally in the sense of the Pixar ICE language, but they did use the term "Abyss" (which is where I sampled the word) when they tried the idea of a PostScript for image processing after the Pixar Image Computer was scrapped. There it meant the value outside of the defined region, which defaults to 0,0,0,0 right now in PB, but you might want to overload that, e.g. with a background color.

                                     

                                    c. A 32-bit floating point window (or maybe double), initially set at the load-image stage to 0,0,Width,Height (more on what this is later).


                                    ///////////////////

                                    From there:

                                     

                                    1) A shape renderer that invokes PB can set its size to +1,+1; it now has a defined abyss of color 0,0,0,0.
                                    An input image cannot do that. Undefined != Defined. (By extension, perhaps things like a comp in AE, which has a static crop box over the sequence, are undefined outside without user intervention, i.e. without the user changing the comp settings.)

                                     

                                    2) For non single/point-process filters, if the abyss is not defined, do not let the input region grow past the union of the source and destination abyss rects.

                                    ** do not confuse the image rect window with a kernel process region

                                    -- then the abyss rect is a benign thing to maintain; "defined abyss" inputs could also simply be initialized to have MAX_IMAGE_RECT+1 as the abyss size, so a defined-or-not boolean maybe does not have to be set.
                                    Sometimes things need to live in documented convention rather than be explicitly enforced. Right now in AE SmartFX there is no distinction between scaling the output and adding pixels to blur (flexible at the API level, but interpretations vary if it is not clearly documented). For example, if you do a 3x3 edge detection, you will end up with a one-pixel edge frame if you pad with transparent pixels.

                                         We ended up, for many of our tools in AE, adding a menu so the user can decide whether or not to extend the smart rect... because of unpredictable behavior (referenceable expectation).

                                         Also note there is not really a concept of a generator in an application like AE (except perhaps text, shape and solid layers); you typically apply a noise effect to a solid or some sort of layer...

                                     

                                    3) When you clamp the edge tiles, you have to replicate the nearest pixel into the empty pixels if the abyss is not defined (or, as per above, if you are asked to sample outside of the abyss rect), rather than simply padding with transparent pixels. (ref: GL_CLAMP)

                                     

                                    4) Another note: I am not clear whether pixel size, reading the doc, means proxy resolution.

                                    If pixel size is defined as 1.0/pixel.xy, e.g. 1/720.0 already, then you only need to maintain an additional point, which is 0,0 in the root node.
                                    That's just one extra thing to maintain in the node, no?

                                    ** You also need a way to scale some spatially dependent parameters based on proxy resolution as well (a separate thing). Is that there?
                                    (i.e. in AE, setting the display size to half res, the degraded-res pref to 1/4...)

                                     

                                    5)

                                    Yes, you can reset "pixel size" after a node is processed, for the next node that will use it.
                                    Then only one thing changes a pixel size: being placed in a larger or smaller image window (whether that is a rect copy or a cropped scale does not matter; i.e. a rect zoom does not change the output size, but it burns in a different "pixel size").

                                    You might not need to expose the abyss rect directly to users; you can provide function calls for that.
                                    The most typical request might be: in normalized units, are you outside (under 0, over 1), and where is the 0.5, 0.5 center?
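                                    [A sketch of the kind of helper calls being suggested. This assumes, hypothetically, that the runtime tracked an image rect (origin in xy, size in zw) per input and exposed it to kernel code; no such facility exists in Pixel Bender today.]

                                    ```
                                    // Hypothetical helpers over an assumed per-input image rect.
                                    float2 toNormalized(float2 pixelCoord, float4 imageRect)
                                    {
                                        // Map a pixel coordinate into 0..1 units of its image frame.
                                        return (pixelCoord - imageRect.xy) / imageRect.zw;
                                    }

                                    bool outsideImage(float2 norm)
                                    {
                                        // The "under 0, over 1" test in normalized units.
                                        return norm.x < 0.0 || norm.y < 0.0 ||
                                               norm.x > 1.0 || norm.y > 1.0;
                                    }
                                    ```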

                                     

                                    For illustration, in RenderMan SL you can do things such as:

                                    output vector pixelvec = vtransform("raster", someVariable);
                                    Note "raster" (one can request pixel/image space, screen coords...).
                                    In a system like mental ray, a sample has an x,y,z value and you have access to the camera transform to request the value at the proper level.


                                    Somehow the distance from a pixel coordinate to 0,0 and 1,1 (the edges in normalized units) of its parent image frame must always be available.
                                    Again, near-infinite image rects are kosher, but to treat them the same as undefined pixels from a loaded input image is conceptually/mathematically/artistically wrong.


                                    Pierre at revisionfx dot com