9 Replies Latest reply on Oct 9, 2007 6:21 PM by PierreJasmin

    Hydra API documentation

    Cédric Néhémie
      Hi all,

      First, let me say that AIF is THE tool I was waiting for; having the ability to create custom filters is one of the best improvements in Astro (with 3D support, of course ^^ ).

      As I start to discover it, like everybody here I guess, I wonder where we can find the Hydra API documentation? Is it part of the After Effects documentation?
        • 1. Re: Hydra API documentation
          Kevin Goldsmith Level 3
          The Hydra API documentation is included with the AIF Technology preview as a PDF. You can get it from the help menu in the toolkit or it is under:
          \Program Files\Adobe\AIF Toolkit\Docs (Windows)
          or
          /Applications/AIF Toolkit/Docs (Mac)

          The file is Hydra10.pdf.
          • 2. Re: Hydra API documentation
            Cédric Néhémie Level 1
            Thanks Kevin :)
            • 3. Re: Hydra API documentation
              grimmwerks
              Yeah, this is fantastic stuff; I'm hoping it runs well on video -- fullscreen?!?
              It reminds me a bit of Apple's Quartz Composer -- or is it based on OpenGL kernel programming?
              • 4. Re: Hydra API documentation
                Kevin Goldsmith Level 3
                It is based on OpenGL Kernel programming as is Quartz Composer (to the best of my knowledge).
                • 5. Re: Hydra API documentation
                  PierreJasmin
                  I don't fully understand a few things; here are my first naive read-through notes:

                  1* Licensing: I note that the licensing currently permits only "internal" use... is this technology planned only for Adobe apps?
                  2* Purpose: Will the sample app become available as a C++ example or something like that -> a C++ library (so it's clear how to load shaders, and how to load images and collect them back)? I am not quite sure what the application of the toolkit is - the language by itself?
                  3* ImageSize: Can I see an example of a linear ramp, without arguments, that returns 0 at the bottom and 1.0 at the top - without parameters?
                  Are all regions remapped to a global resolution (w,h)? - so a large input could have a DOD of -500,-500 to 2900,2900 on a 720,480 render canvas... ?
                  4* PixelSize: Perhaps the same question.
                  If I have two images of different sizes in and out, sampleNearest( src, outCoord() +/- ... ) is in what space? - wouldn't there be a need for an inCoord() as well? And if you want to live without explicit image sizes, don't you need pixel sizes to properly sample "real" pixels?
                  5* None of your examples seem to worry about border conditions where you might sample off the buffer -- is that to simplify the examples?
                  eg:
                  colorAccumulator += sampleNearest(src, outCoord() + float2(-1.0, 0.0));
                  denominator++;
                  Shouldn't that crash on the first column of pixels, OR simply throw a GL_INVALID_VALUE (wait, is this a GLSlang extension or a look-alike alternative)?

                  pierre@revisionfx.com



                  • 6. Re: Hydra API documentation
                    Elba Sobrino Adobe Employee
                    I can answer your first two questions but will let one of the more technical folks on the team reply to the others:
                    1* At this time, AIF and Hydra are planned for integration only within Adobe apps. But, who knows what the future will bring based on adoption of the technology.
                    2* Since we are not exposing a C++ interface to the AIF technology at this time, there are no plans to provide the sample app as a C++ example. The toolkit is intended as a development environment for Hydra filters - editing, debugging, and previewing filters before using them within an Adobe app.
                    • 7. Re: Hydra API documentation
                      AIF Bob Level 3
                      Pierre, I can address some of your points:

                      #2 The current toolkit app isn't particularly useful by itself; it was a way for us to get the technology out to users so that they could experiment with it before we released Astro. It also gives us a chance to get feedback from the developer community.

                      #3 & #4 I'm not sure I understand the question. I'll give some more information about the subject and hope that it helps - if it doesn't please ask again. All images (input and output) are assumed to be mapped to the same coordinate system - they all have their origin in the same place and the pixels are all assumed to be the same size. Having said that, if you want to pass in parameters to indicate the offset or pixel aspect ratio of particular images there's nothing to stop you from doing that. Your kernel can then use these parameters to control how it samples the inputs.
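                      For example, something like the following sketch (untested, written in the style of the toolkit samples - the srcOffset parameter name and its use are just an illustration, not part of the API) passes the offset of a particular image in as a parameter and shifts the sampling by it:

                      kernel OffsetSample
                      {
                          parameter float2 srcOffset; // hypothetical: where src's origin sits relative to the output
                          void evaluatePixel(in image4 src, out pixel4 dst)
                          {
                              // Sample src as if its origin were shifted by srcOffset.
                              dst = sampleNearest(src, outCoord() - srcOffset);
                          }
                      }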

                      #5 Sampling off the edge of the image returns transparent black pixels; it will never crash. All images are assumed to be infinite, but they have a finite domain of definition. If you need to know where the edge of the image is in order to do something special there (e.g. repeating or reflecting the edge pixels), you can get that information from a parameter - search for "referenceFrame" in the Hydra spec.

                      • 8. Re: Hydra API documentation
                        BrianRonan
                        Hi PierreJasmin,

                        Very good questions. Hopefully the following answers will help clear things up:

                        3* ImageSize: Can I see an example of a linear ramp, without arguments, that returns 0 at the bottom and 1.0 at the top - without parameters?

                        The example that you're requesting cannot be implemented without parameters. At this time, the output image size is not available to you in evaluatePixel() except by way of a user-specified parameter.
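                        So the ramp ends up looking something like this sketch (untested, in the style of the toolkit samples; rampHeight is exactly the user-specified parameter I mentioned, and I'm assuming a kernel with no image input is accepted):

                        kernel LinearRamp
                        {
                            parameter float rampHeight; // the user must supply the output height in pixels
                            void evaluatePixel(out pixel4 dst)
                            {
                                // 0.0 on the bottom row, 1.0 on the top row
                                // (flip the ratio if y grows downward in your coordinate system).
                                float ramp = outCoord().y / rampHeight;
                                dst = pixel4(ramp, ramp, ramp, 1.0);
                            }
                        }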

                        4* If I have two images of different sizes in and out, sampleNearest( src, outCoord() +/- ... ) is in what space? - wouldn't there be a need for an inCoord() as well? And if you want to live without explicit image sizes, don't you need pixel sizes to properly sample "real" pixels?

                        outCoord() is always in reference to the output region. Hopefully, the different sizes are known to the algorithm (because that's what's actually changing the image size), so you can adjust the coordinates accordingly.
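                        For instance, if the kernel itself is what halves the image, the adjustment is just a scale applied to the coordinate before sampling (again a rough, untested sketch with the factor hard-coded for illustration):

                        kernel HalfSize
                        {
                            void evaluatePixel(in image4 src, out pixel4 dst)
                            {
                                // outCoord() is in output space; the source is twice as large,
                                // so scale the coordinate up before sampling.
                                dst = sampleNearest(src, outCoord() * 2.0);
                            }
                        }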

                        5* None of your examples seem to worry about border conditions where you might sample off the buffer -- is that to simplify the examples?
                        eg:
                        colorAccumulator += sampleNearest(src, outCoord() + float2(-1.0, 0.0));
                        denominator++;
                        Shouldn't that crash on the first column of pixels, OR simply throw a GL_INVALID_VALUE (wait, is this a GLSlang extension or a look-alike alternative)?

                        Yes, we implemented samples for clarity, not completeness. The code that you posted will not, however, throw an exception or crash. We've "increased" the size of the buffers infinitely by filling the non-image areas with transparent black. This can have the unfortunate side effect of bringing transparent black into the image, but that can be worked around by clamping the sample location against a user parameter.
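                        The clamping workaround looks roughly like this (untested sketch; srcSize is a user parameter invented here to stand in for the image size, since the kernel cannot query it):

                        kernel ClampedSample
                        {
                            parameter float2 srcSize; // user-supplied width/height of src's defined area
                            void evaluatePixel(in image4 src, out pixel4 dst)
                            {
                                // Keep the sample position inside the valid image rect so the
                                // neighbourhood sample repeats the edge pixel instead of pulling
                                // in transparent black.
                                float2 coord = outCoord() + float2(-1.0, 0.0);
                                coord = min(max(coord, float2(0.0, 0.0)), srcSize - float2(1.0, 1.0));
                                dst = sampleNearest(src, coord);
                            }
                        }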
                        • 9. Re: Hydra API documentation
                          PierreJasmin Level 1
                          Answers back:
                          1) So it's not for me :) - I will just note that one reason we don't use things like the RapidMind framework (or even look at them seriously) is their licensing model.
                          2) It seems that in the long term this is perfect technology for embedding in imaging devices like cameras (so you shoot raw but maintain image-enhancement setups). I understand you first have to have client software that can receive that. I also understand your requirement - although I don't do this kind of work - to have some sort of embeddable code for web-based image processing. In a compositing app like AE, I would say that outside of a context such as something like a PS Filter Factory++, I don't see many applications.
                          3-4) This is obviously not considering effects that receive two inputs, because based on your description you are even more lost in space than within an AE SmartFX. In AE, at least by calling an input twice with the preserve flag true or false, we can figure out where we are and know where, e.g., a point param returned in layer-reference width and height actually maps in an input... It's not just about passing params (if I am not the host on top); it's also about considering the issues of cascading these sorts of things. For example, your twirly effect should expand the output region larger than the initial input size (grow the bounds - and if animated, that would change every frame, so it's not something that can be put in as UI parametrics), returning output coordinates that may be things like -50,-50 to 770,520 on an initial reference size of 640x480, no? And if so, a simple thing like a second effect that applies a translate is possibly lost in space, no?
                          I also noted that it looks like you assume regions always grow the same amount on each side of a rect. Your sample app does not seem to support applying two effects in a row (it crashes).
                          5) Isn't this a conceptual error? Your blur can extend the output by 1 if you want such behavior for cases like text characters, but the host should never do that. You can pad the allocated memory by a pixel, but the nearest sample should be within the valid image rect, so as a default behavior, copy the nearest pixel if you try to sample outside the box.
                          What you do seems different to me than, say, an OGL convolution border mode with a color abyss (CONVOLUTION_BORDER_COLOR) of 0,0,0,0 (an app can have such a default STATE, but should not randomly add a transparent pixel around a sprite).

                          I think, since this is obviously more for a web-type developer, that you should consider a view of the world where there is a static frame of reference for a sequence of sprites (a virtual frame buffer of fixed size on your infinite canvas, as part of some sort of clip object), where the DOD that can vary every frame is the actual allocated pixels, outside of which it's the background color. Noting how many bugs have come from everyone who ported something to the SmartFX API in AE (including examples AE ported themselves, and indirect bugs in the app that properly following the spec reveals), I would consider this a warning, as you are building up even more assumptions than that API rather than slightly fixing the mental model you operate on, say.

                          respectfully

                          Pierre,
                          I live here: pierre@revisionfx.com -- I might not be back on this list for a while; I was just curious.