0 Replies Latest reply on May 14, 2010 4:40 PM by Aaronius9er9er

    ShaderFilter on zoomed images

    Aaronius9er9er Level 1

      Hey everyone,


      I'm trying to implement a somewhat simplified photo editing tool like Photoshop.com or Aviary Phoenix for a client.  We're giving the user options to perform photo-wide changes like hue/brightness/contrast and scoped changes like fixing blemishes, red eye, etc.  Right now we have things working okay.  We're applying photo-wide changes using a ShaderFilter on the main sprite, and the scoped changes are child sprites with bitmap fills created from the results of ShaderJobs.  This works nicely, but we've run into the following problems:


      (1) When the user zooms in on the image so that the image is really large, we get this warning: "Warning: Filter will not render.  The DisplayObject's filtered dimensions (4820, 3615) are too large to be drawn." and the filters disappear.  From my understanding, filters stop rendering once the filtered output exceeds 16,777,215 pixels (even though the original bitmap data is smaller than that).
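      The dimensions in that warning do overshoot the cap. A quick sketch of the check (assuming the 16,777,215-pixel limit, which is 2^24 − 1; `filterWillRender` is a hypothetical helper, not a Flash API):

      ```typescript
      // Flash Player refuses to render a filter whose output bitmap exceeds
      // 16,777,215 (2^24 - 1) pixels. `filterWillRender` is a hypothetical
      // helper to illustrate the check, not part of the Flash API.
      const MAX_FILTERED_PIXELS = 16777215; // 2^24 - 1

      function filterWillRender(width: number, height: number): boolean {
        return width * height <= MAX_FILTERED_PIXELS;
      }

      // 4820 * 3615 = 17,424,300 pixels -- roughly 4% over the cap.
      console.log(filterWillRender(4820, 3615)); // false
      ```

      So even though the source bitmap is fine, the zoomed (filtered) dimensions are what get checked against the limit.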


      (2) If the user sets a hue and then makes scoped changes like fixing red eye, it takes a few seconds for the changes to occur.  Without the hue set beforehand, it's very fast.  The red-eye fix itself is processed quickly (~2ms), so it appears the delay comes from the hue filter having to re-execute.  I'm assuming it re-executes, anyway; that's my understanding of how ShaderFilters work.


      So, I went looking at Photoshop.com and Aviary and both seem to let you zoom into an image really far (seemingly larger than 16,777,215 pixels), set a hue, and see the results.  I would assume they're not using a ShaderFilter then?  Are they modifying the actual pixels of the bitmap?


      If they're not using a ShaderFilter, then how are they managing undo/redo?  At the moment, if the user hits undo, we are just removing or changing the ShaderFilter.  Likewise, if the user uses the redeye brush and then hits undo, we remove the sprite that was created from that operation.  So if Photoshop.com or Aviary is modifying the actual pixels of the bitmap, and the user were to use red eye (scoped), then hue (photo-wide), then red eye (scoped), then hit undo three times, how are they getting back to the previous states?  I can think of a couple ways but they seem fairly impractical.
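      One approach I've seen described for destructive editors (just a guess at what Photoshop.com or Aviary might do, not anything confirmed) is to keep the pristine source image plus an ordered list of operations, and rebuild the visible image by replaying that list. Undo then just drops the last operation and replays. A minimal sketch, with hypothetical names and a flat pixel array standing in for real bitmap data:

      ```typescript
      // Replay-based undo: the source image is never modified; the visible
      // image is the source with the surviving edits re-applied in order.
      // A real editor would cache intermediate results (or snapshot only the
      // dirty region before each edit) instead of replaying from scratch.
      type Edit = (pixels: number[]) => number[];

      class EditHistory {
        private edits: Edit[] = [];

        constructor(private readonly source: number[]) {}

        // Apply a new edit (scoped or photo-wide) and return the new image.
        apply(edit: Edit): number[] {
          this.edits.push(edit);
          return this.render();
        }

        // Undo: discard the most recent edit and replay the rest.
        undo(): number[] {
          this.edits.pop();
          return this.render();
        }

        // Replay every surviving edit over a copy of the untouched source.
        private render(): number[] {
          return this.edits.reduce((px, edit) => edit(px), [...this.source]);
        }
      }
      ```

      With this scheme, red eye, then hue, then red eye, then three undos lands you back on the untouched source, at the cost of replay time; snapshotting the region each edit overwrote would trade memory for speed instead.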


      If anyone has any insight or good articles on this it would be much appreciated.  Thanks!