
    needed() and changed() region functions

    DavideBarranca Level 1
      Hello,
      I'm studying the basicBoxBlur code that comes as a sample in the PB Toolkit. As I'm a beginner in code writing, I have some trouble understanding the proper use of the needed() and changed() region functions: are they supposed to simplify the PB calculations, since the filter runs even when those lines are commented out?
      Could you kindly provide some more info about their role, specifically in the basicBoxBlur code?
      I'm sorry if this appears to be a dumb question, but my personal learning curve is quite steep... (my actual goal is to be able to rewrite some sort of Gaussian Blur, Maximum, Minimum and Median Photoshop filters, so this is only an initial step toward them - I'll be fighting with sorting algorithms later on :)
      TIA,

      Davide Barranca


        • 1. Re: needed() and changed() region functions
          BrianRonan
          Hi Davide,

          I'm really glad you've asked this question - it's a very good one. Region functions are one of the more confusing topics in Pixel Bender. Unfortunately, that makes them difficult to explain in words, but we'll try anyway.

          The region functions are the mechanism by which you indicate to the product you're running in (whether it's the toolkit, Photoshop, or After Effects) how your filter affects the size of the image. Currently, Flash does not support these functions in the Pixel Bender source, but you do need to implement the same thing in the supporting ActionScript code.

          If you think about a blur or any other convolution-type filter, you end up sampling a certain window around the pixel in order to get the output color. For instance, if you have a blur on the X axis that's one pixel in radius, you end up sampling 3 pixels for every output: one to the left, the pixel at the same coordinate as the output pixel, and one to the right of that. Simple. If you consider what happens at the edge of the image, though, things get more complicated. If you're processing the result for the very left edge of the output image, say at coordinate (0, 0), you'll need to sample three pixels: (-1, 0), (0, 0), and (1, 0). You need to account for the negative values when sampling. There is a similar problem at the other edge, where you will be sampling at a location beyond the output window on the right. This means that to produce an output of size 512, you actually need an input of size 514 (one extra pixel for the left, and one extra pixel for the right). This is what the needed function calculates. It answers the question: "For an output of size X, what size input do I need?" In most cases this will be exactly the same as the output, but in some examples, like BoxBlur, this is not the case.
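
          To make that concrete, here is a minimal sketch (not the actual sample code) of what needed() could look like for that one-pixel-radius horizontal blur, assuming the kernel's input image is called src:

          region needed(region outputRegion, imageRef inputRef)
          {
              // The kernel samples one pixel to the left and one to the right of
              // each output pixel, so it needs the requested output region grown
              // by one pixel step on the X axis.
              float2 singlePixel = pixelSize(src);
              return outset(outputRegion, float2(singlePixel.x, 0.0));
          }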

          The same thing happens on the output as well. Consider the one-dimensional box blur with a radius of 1 again. In this case, you provide it an input of size X. If we were interested in displaying any pixels that had any color at all, we would get an output of size X + 2. This is because the edge pixels contribute to the coloring of pixels outside of the input dimensions. In other words, we would get a non-black pixel at location (-1, 0), because the pixel at (0, 0) would be within the radius. This is what gives you a smearing of the image outside of its boundaries when you apply a blur to it. This is what the changed function calculates. In other words, it asks the question: "If I give the filter an input of size X, what size output would it produce?"
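
          Again as a hedged sketch (same assumptions as above, with src as the input image), the matching changed() answers the opposite question, so for this blur it expands the input region by the same amount:

          region changed(region inputRegion, imageRef inputRef)
          {
              // Pixels inside the input region smear one pixel step outward on
              // the X axis, so the affected output region is the input region
              // outset by one pixel.
              float2 singlePixel = pixelSize(src);
              return outset(inputRegion, float2(singlePixel.x, 0.0));
          }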

          Again, for most cases, this would be the same as the input size; color corrections are a good example of filters in this category. Additionally, for convolutions and blurs, the needed function would be the same as the changed function. For warps and other transformations, this is not the case. The best example of this is scaling the image by two, as sketched below.
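
          As an illustration only (this is not from the sample, and the exact use of the transform() region helper is an assumption based on the Pixel Bender reference rather than tested code), a kernel that scales its input up by a factor of two is a case where the two functions differ: needed() maps the requested output region back into input space with the inverse scale, while changed() maps an input region forward.

          region needed(region outputRegion, imageRef inputRef)
          {
              // Output pixel (x, y) comes from input pixel (x / 2, y / 2), so
              // only the half-sized input region is needed.
              return transform(float3x3(0.5, 0.0, 0.0,
                                        0.0, 0.5, 0.0,
                                        0.0, 0.0, 1.0), outputRegion);
          }

          region changed(region inputRegion, imageRef inputRef)
          {
              // An input region affects an output region twice its size.
              return transform(float3x3(2.0, 0.0, 0.0,
                                        0.0, 2.0, 0.0,
                                        0.0, 0.0, 1.0), inputRegion);
          }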

          When you commented out the region functions, the output image became smaller by the blur radius. This was probably not very noticeable, but if you made the radius large, you would see the difference. Additionally, you are probably wondering why we need this level of detail in the filter. In most simple examples, no one would ever notice this, since we often have a single input image and execute a single filter on it. However, because these filters can be used as a small part of the workflow for professional graphics and effects applications, getting these details right is very important for the filter to render the correct results in all cases.


          Since you asked specifically about the basicBoxBlur sample, here's a breakdown. Note that the needed and changed functions contain exactly the same code because it's a convolution.

          float2 singlePixel = pixelSize(src);

          This gets the size of a single pixel, which accounts for the pixel aspect ratio. One thing I didn't mention above is the need to account for the pixel aspect ratio when calculating the regions (that's the subject of another post entirely).

          return outset(outputRegion, float2(singlePixel.x * ceil(blurRadius), singlePixel.y * ceil(blurRadius)));

          This line increases the requested region by the size of the blur window radius (conceptually, expanding the one-pixel step out to a radius of blurRadius pixels). We take the ceiling of the blur radius in case the radius is a non-integral value.
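
          Putting those two lines in context, here is roughly how they sit inside the region functions, using the names from the quoted code (src and blurRadius); take this as a sketch of how the pieces fit together rather than a verbatim copy of the sample:

          region needed(region outputRegion, imageRef inputRef)
          {
              // Size of one pixel step, which also handles non-square pixels.
              float2 singlePixel = pixelSize(src);
              // Ask for the output region grown by the (rounded-up) blur radius
              // in both directions.
              return outset(outputRegion, float2(singlePixel.x * ceil(blurRadius),
                                                 singlePixel.y * ceil(blurRadius)));
          }

          region changed(region inputRegion, imageRef inputRef)
          {
              // Identical body: for a convolution, the smear outward matches the
              // window that was needed inward.
              float2 singlePixel = pixelSize(src);
              return outset(inputRegion, float2(singlePixel.x * ceil(blurRadius),
                                                singlePixel.y * ceil(blurRadius)));
          }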


          I hope that helps clear up any confusion. Please let me know if you have any questions or need clarification on any points. By the way, I'm very impressed with your list of filters, and I wish you the best of luck with the sorting ones.

          Thanks,
          Brian Ronan
          • 2. Re: needed() and changed() region functions
            DavideBarranca Level 1
            Brian,
            thank you very much for your valuable answer - now I finally get it!
            There's no doubt the extra accuracy provided by adding needed() and changed() is very important indeed. By the way, in the personal project I've just begun developing, the blur is just one step (as you can guess, maximum, minimum and median are also needed, and I'm sure I'll come back to look for more answers ;-)
            I'd like to ask you a small extra clarification: when you write
            float2 singlePixel = pixelSize(src);
            you're getting [1.0, 1.0], aren't you? Reading the PB language reference, I first thought it was a function to get the image's pixel size (640x480, for instance), with pixelAspectRatio() getting the image ratio (4:3, for instance). I was wrong - we're just talking about the pixel's (and not the image's) size and ratio, right? (I know, the function name should have told me something.)

            Maybe it's a bit off-topic, but can I ask your expert advice on some good reading about digital image processing and (PB-like, so C-like, I guess) code writing? I did a little research and found some books, but the problem is that their subjects are "theory/algorithm/math" only, or they use languages that are not so useful (at least to me), like Digital Image Processing with MATLAB. I come from a different background, as I'm in the color correction business, but I would really like to improve my programming skills (which is easy, as they're close to zero) and finally forge some ideas I've been playing with in my spare time over the last few years. I don't currently know how far the PB language will go in the future, but I have a feeling it could be the right choice to start writing code (my alternative is FilterMeister, which uses a superset of the former Adobe Filter Factory - but it's PC-only, so...).
            Thanks again for your answers,
            best regards,

            Davide Barranca
            Bologna, Italy