7 Replies Latest reply on Apr 19, 2012 8:54 AM by gabeartist01

    Why has this Z depth expression not yet been made?


      I recently had an issue where I needed to composite two objects from a 3D application on separate layers. The objects interact in such a way that sometimes the object from layer A is in front and sometimes the object from layer B is in front, but layer order in After Effects forces either A or B to be in front the whole time, depending on which layer is on top.



      (Yes, I know that in this example you could simply mask it, but in reality there were several objects all dealing with the issue described, and masking them all would have been too complicated.)





      So in my search for an answer I found this tutorial...






      This is a good answer if you're only dealing with two layers, but it doesn't help if you're dealing with more than two.



      So here's my question: why has nobody made a script, option, or button in After Effects that simply asks, "Do you want to use the Z depth in your RPF/RLA file to determine where the objects should sit in the scene?"



      I ask only because it seems like the most obvious thing to do. If the 3D application went to the trouble of creating the option to include Z-depth information, and even has file types that support carrying that information, why wouldn't the compositing program have an option to use that information in the most obvious way?
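For anyone wondering what the requested feature would actually do, here's a minimal sketch of a per-pixel depth merge. The layer colors and depth values are made up for illustration, and it assumes the convention that a smaller Z value means closer to the camera:

```python
import numpy as np

# Two rendered layers, each with a per-pixel Z depth (smaller Z = nearer).
# The composite keeps, per pixel, whichever layer is closer to the camera --
# the operation the question is asking After Effects to automate.
h, w = 4, 4
rgb_a = np.full((h, w, 3), 0.8)   # layer A: light gray
rgb_b = np.full((h, w, 3), 0.2)   # layer B: dark gray
z_a = np.full((h, w), 5.0)        # layer A sits at depth 5 everywhere
z_b = np.full((h, w), 3.0)        # layer B is nearer on the left half...
z_b[:, w // 2:] = 7.0             # ...and farther on the right half

a_in_front = z_a < z_b                                  # per-pixel depth test
composite = np.where(a_in_front[..., None], rgb_a, rgb_b)
```

On the left half of the frame layer B wins the depth test; on the right half layer A does, so the crossing happens automatically with no masking.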

        • 1. Re: Why has this Z depth expression not yet been made?
          Mylenium Legend

          It's not obvious, it's complex. Z-buffer data bears no relation to physical depth, and unless you give a baseline reference, it's useless. Furthermore, such data is by no means necessarily linear; it depends on how the 3D program creating it treats "depth" in its camera system. And here is where it most obviously falls apart: except for sampleImage(), there is no way for expressions to access pixel data. Since that would need to be done in loops and on multiple layers, it would be extremely slow. So despite your assertions, this is far from simple or trivial in many ways, and for performance reasons alone it's best left to plug-ins like Buena Depth Cue or any of your favorite Z-buffer/DOF blur tools, where the user simply picks a desired depth plane with a color picker or crosshair and the code does all the hard work.
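To illustrate the "baseline reference" point above: many renderers store depth non-linearly, and turning a normalized depth sample back into a real camera-space distance requires knowing the camera's near and far clip planes. A sketch using one common convention, with assumed near/far values (0.1 and 100.0):

```python
def linearize_depth(z_norm, near=0.1, far=100.0):
    """Convert a normalized non-linear depth sample in [0, 1] back to
    eye-space distance, given the camera's near/far clip planes.
    Without knowing near and far, the raw sample is meaningless."""
    return (near * far) / (far - z_norm * (far - near))
```

A sample of 0.0 maps back to the near plane and 1.0 to the far plane; everything in between is squeezed non-linearly, which is exactly why two Z buffers from different cameras can't be compared without this reference.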



          • 2. Re: Why has this Z depth expression not yet been made?
            gabeartist01 Newcomer

            Thanks Mylenium. It took a bit of searching, but here's the plug-in that does it:







            Here's more information for anyone searching.


            • 3. Re: Why has this Z depth expression not yet been made?
              Mylenium Legend

              Yes, it's a Pixel Bender plug-in, which is a whole different thing than expressions, which is what you asked about...



              • 4. Re: Why has this Z depth expression not yet been made?
                gabeartist01 Newcomer

                Yeah, I know. Until you mentioned a plug-in, I didn't even consider one. I wasn't against a solution that isn't an expression; I just didn't know a plug-in would be the solution to this problem.

                • 6. Re: Why has this Z depth expression not yet been made?
                  Rick Gerard Mythic

                  A better approach is to set up your render passes in a more efficient way. Rendering out a separate alpha for each object that takes into account the other objects in the scene is easy in most 3D applications. That will solve your crossing problem.

                  • 7. Re: Why has this Z depth expression not yet been made?
                    gabeartist01 Newcomer

                    I was just looking more into that, and it seemed like an even better way to go. But I noticed an interesting issue. I'm rendering out of Maya 2012, and found that following Autodesk's tutorial runs me into a problem. In the YouTube tutorial where they explain render passes with the spaceship, the instructor adds the various objects to different contribution maps, then on the layer with the background he tells Maya to make a "beauty pass." Then in the Attribute Editor he edits the beauty pass node by enabling the "hold attribute." This masks out any area where the object is covered! Great solution... However, this poses two problems for me:


                    1. In the compositing program there's about a one-pixel gap separating the objects. (This can be solved by duplicating the object, adding a Minimax effect to spread the color out about one pixel, and setting that layer behind the original. But that means doubling every layer and adding an effect to each copy, which is not very efficient as far as render time goes.) So if you have found a better way to solve this issue, I'm totally open to suggestions!
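For reference, the "spread the color out about one pixel" trick described above is essentially a maximum filter over each channel. A rough NumPy sketch (note that np.roll wraps around at the frame edges, which a real implementation would clamp instead):

```python
import numpy as np

def spread_max(channel, radius=1):
    """Grow each pixel to the maximum of its (2*radius+1)^2 neighborhood,
    like a 1 px 'maximum' spread behind the original layer.
    channel is a 2-D float array; edges wrap (sketch only)."""
    out = channel.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
            out = np.maximum(out, shifted)
    return out
```

Placing the spread copy behind the original fills the one-pixel seam without changing the visible edge of the foreground object.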



                    2. If I have a camera edit, I need to go back and re-render each image sequence, which could be ten different objects. Since the parallax changes the shape and the alpha, there's nothing that can be done about that. But the real kicker is that if you use contribution maps, you have to tell Maya to batch render each contribution map individually, one after the other, naming the image sequence correctly each time and replacing your old image sequences. What a pain! It would be so much easier if it worked like render layers: just check each one you want to re-render, click Batch Render, and boom, it re-renders each render layer correctly to where the project is set, with no renaming needed. Far more organized. So why don't I just do that instead of using contribution maps, you ask? Because that won't give me masked-out renders; objects on separate render layers don't interact, so the "hold attribute" won't do me any good.


                    Since I'm new to this, I'm sure there's an easier way to do it, or something that I've overlooked. Let me know if you have any ideas. Thanks!