2 Replies Latest reply on Apr 5, 2011 3:25 PM by PS::Chuck

    Using Pixel Bender to process Kinect Data?

    jblatta

      I am looking for a faster way to process the image stream coming out of the Kinect.  I am using http://www.as3kinect.org/ to connect the Kinect to Flash via a socket server.

       

      Here is how it currently works, or at least my understanding of it:

      • Gets the image from the Kinect depth camera buffer as a ByteArray
      • Converts it to BitmapData
      • Applies a threshold to the image to make it pure black and white
      • Searches the image for white shapes using getColorBoundsRect, starting at the top left (0,0)
      • Tests whether the shape meets the min/max size requirements to count as a blob (hand)
      • If it is the right size, the point and rect data are added to an array for use in Flash
      • It then fills that shape with another color so it can loop back over the image to find the next white shape
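The loop above can be sketched in plain JavaScript (AS3 shares its syntax). The function name, the 0/1 grid format, and the size thresholds are illustrative assumptions, not the as3kinect API; the flood fill stands in for the "fill that shape another color" step:

```javascript
// Sketch of the scan/flood-fill blob search described above.
// `pixels` is a flat Uint8Array of 0 (black) / 1 (white) values after
// thresholding; width/height describe the grid. All names are illustrative.
function findBlobs(pixels, width, height, minSize, maxSize) {
  const blobs = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (pixels[y * width + x] !== 1) continue;
      // Flood-fill this white shape with the marker value 2 so the outer
      // scan never revisits it, tracking its bounding box as we go.
      let minX = x, maxX = x, minY = y, maxY = y;
      const stack = [[x, y]];
      pixels[y * width + x] = 2;
      while (stack.length > 0) {
        const [px, py] = stack.pop();
        minX = Math.min(minX, px); maxX = Math.max(maxX, px);
        minY = Math.min(minY, py); maxY = Math.max(maxY, py);
        for (const [nx, ny] of [[px+1,py],[px-1,py],[px,py+1],[px,py-1]]) {
          if (nx >= 0 && nx < width && ny >= 0 && ny < height &&
              pixels[ny * width + nx] === 1) {
            pixels[ny * width + nx] = 2;
            stack.push([nx, ny]);
          }
        }
      }
      const w = maxX - minX + 1, h = maxY - minY + 1;
      // Only shapes inside the min/max window count as a blob (hand).
      if (w >= minSize && w <= maxSize && h >= minSize && h <= maxSize) {
        blobs.push({ x: minX + w / 2, y: minY + h / 2, w: w, h: h });
      }
    }
  }
  return blobs;
}
```

The per-pixel scan and the flood fill are exactly the parts that get slow in ActionScript when several shapes are on screen at once.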

       

      So what I want to know is: can this process be simplified using Pixel Bender? I know it can do the threshold part, but since Pixel Bender is limited when used with Flash, I am not sure I can find the bounding boxes for the white shapes.  The goal is to get back the center point (x,y) of each white shape that meets the min/max width/height requirements, as fast as possible, using Pixel Bender. Looping over the image in Flash is slow and can make tracking multiple shapes laggy. Any thoughts?

        • 1. Re: Using Pixel Bender to process Kinect Data?
          PS::Chuck Adobe Employee
          Pixel Bender 3D and languages like it (HLSL, GLSL, and GLSL ES) use
          a kind of parallelism called gather.  To calculate a particular value,
          such as a pixel or a vertex, the system looks at values which are in some
          sense "nearby".  The number of values produced, however, is not small.  In
          the case of an image-processing use of a gather language, you generate all
          the pixels in a requested portion of the image.
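To make the gather model concrete: a 3x3 box blur is a pure gather computation. Each output pixel reads a fixed neighborhood of the input and writes only its own slot, never a location shared with another pixel's computation. A minimal sketch in plain JavaScript (the grayscale array format and function name are illustrative assumptions):

```javascript
// Pure gather: every output value reads a fixed neighborhood of the
// input and writes only its own slot. src is a flat grayscale array.
function boxBlur3x3(src, width, height) {
  const dst = new Float32Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0, count = 0;
      // Gather the 3x3 neighborhood, clipped at the image edges.
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          const nx = x + dx, ny = y + dy;
          if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
            sum += src[ny * width + nx];
            count++;
          }
        }
      }
      dst[y * width + x] = sum / count; // writes this pixel's slot only
    }
  }
  return dst;
}
```

Because no two output pixels ever write the same destination, every pixel can be computed in parallel with no coordination, which is what makes the model fast on a GPU.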
              It sounds like what you want to do is something called scatter, where you
          take the values under consideration for a particular point, make a
          decision about them, and then put (accumulate) that decision in a place
          which can also be affected by another of these parallel computations.  A
          classic scatter problem is taking an image histogram where you divide up
          the color space into a finite number of bins and then calculate how many
          pixels fall into each bin.
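The histogram example can be sketched in a few lines of JavaScript (bin count and names are illustrative). The defining feature is the write to a shared destination:

```javascript
// Scatter: each input pixel decides which bin it belongs to, then
// accumulates into that bin -- a location other pixels also write to.
// This read-modify-write on shared bins is what a gather kernel,
// which may only write its own output slot, cannot express directly.
function histogram(pixels, numBins) {
  const bins = new Uint32Array(numBins);
  for (const v of pixels) {
    // Map an 8-bit value (0..255) to one of numBins bins.
    const bin = Math.min(numBins - 1, Math.floor(v * numBins / 256));
    bins[bin]++; // shared destination: the scatter step
  }
  return bins;
}
```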
          There are some tricks you can use to make a gather model produce
          scatter-like information through repeated reducing calculations, but it's
          not a particularly simple process and I'm pretty fuzzy on the details.  If
          you were writing for the desktop, I'd suggest you have a look at GPGPU
          programming APIs like OpenCL, CUDA, or DirectCompute.  Unfortunately,
          GPGPU on mobile is at a completely nascent stage right now, and the
          capabilities of mobile parts are the space we're focusing on most heavily
          with Molehill and Pixel Bender 3D in this go-around.
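One of the reduction tricks alluded to above can be sketched as repeated halving: each pass is still pure gather (each output element reads a fixed pair of inputs and writes only its own slot), yet after enough passes the single remaining value carries global information such as a maximum. A hedged JavaScript sketch of the idea over a 1D array (on a GPU, each pass would be a render into a half-sized texture):

```javascript
// Gather-style parallel reduction: each output element reads a fixed
// pair of inputs and keeps the larger one. Repeating until one element
// remains yields a global maximum without any scatter writes.
function reduceMax(values) {
  let current = values.slice();
  while (current.length > 1) {
    const next = new Array(Math.ceil(current.length / 2));
    for (let i = 0; i < next.length; i++) {
      const a = current[2 * i];
      const b = (2 * i + 1 < current.length) ? current[2 * i + 1] : a;
      next[i] = Math.max(a, b); // pure gather: reads two fixed slots
    }
    current = next;
  }
  return current[0];
}
```

Extending the same pattern with min/max over pixel coordinates is one way to coax bounding-box-like results out of a gather-only pipeline, though as noted above it is not a simple process in practice.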
          Chuck.
          • 2. Re: Using Pixel Bender to process Kinect Data?
            PS::Chuck Adobe Employee

            I realized that the example language and runtime I called out are specifically the 3D variant of Pixel Bender and the upcoming Flash Player with 3D support, called Molehill.

             

            That said, my comments are still accurate for the 2D Pixel Bender support in Flash and the Pixel Bender support via the Toolkit / After Effects / Photoshop.  They all follow the same gather model.  We are making some use of scatter in AIF for After Effects & Photoshop, but not in a way that is programmable via Pixel Bender at present.

             

            Thanks,

            Chuck.