So you're dissolving each image to zero as the camera strikes it?
I've done many similar comps, but I use a camera to accomplish the move. That's much easier than attempting to keep rearranging the individual items. Trish and Chris Meyer have tutorials in one of their books that help explain AE's 3d system as "postcards in space" or, as I like to think of it, sheets hanging in a dark garage.
David, I can see my description of what I am doing isn't very clear. What I am doing is similar to your "sheets in the garage" metaphor.
Let me try again using photos hanging in the garage. I have four photos. Let's say the first is a wide angle shot of a football field, taken from the north end zone, pointing toward the goal in the south end zone. For the second shot, I keep the exact same lens settings (i.e., no zoom) and move to the 30 yard line, where I take a 2nd photo, pointing toward the south end zone. I do the same at the 50 yard line and the 30 on the other end of the field, ending with a somewhat tight crop of the south end goal.
Suppose instead I had just taken the first photo. I could have set up a virtual dolly and moved it from the full view of the football field to a close-up view of the goal in the south end zone. Of course, the problem is that it would be such a tight crop, the resulting resolution would be very low.
That is the problem I am addressing with the four photos. If they are aligned properly, I can move the virtual dolly through them to my tight south end crop and preserve acceptable resolution throughout the move. And, it will look to the viewer as if it was just one image.
If I were zooming I would call this superzoom, similar to what Google Earth does. I guess I'll call this superdolly push.
For this to work, each photo in the stack has to align properly with the one before it. It is that alignment step I am struggling to find a better way to do.
After my first post I ran across some discussion of the auto-align tool in Photoshop CS3. I am still on CS2 so I can't try it, but it sounds like it does what I am doing manually in AE. Basically, it looks for places where pixel patterns are the same in the images in two layers, and aligns on those pixel patterns. It may have to do perspective correction to get the alignment correct.
A key difference in what I am doing is that the position in z-space also has to be varied. If, for example, I had only photo 1 and photo 4 from my football field shoot, in AE I would place photo 1 above photo 4, reduce the opacity of photo 1, and move photo 4 in X, Y, and Z until the goal post aligned perfectly. If one of the shots was taken slightly off-plane from the other, I would also have some perspective issues to deal with.
I am wondering if there are any special tricks, something like placing very bright alignment markers on the original photos, and then removing them in post.
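The pixel-pattern matching idea behind CS3's Auto-Align can be sketched in miniature. This is a hedged illustration, not an AE or Photoshop API: it reduces the problem to pure X/Y translation on tiny grayscale arrays, and all names here are my own. It automates the same cue you use visually when you lower a layer's opacity and nudge it into place.

```python
def best_offset(ref, moved, max_shift=4):
    """Find the (dx, dy) shift of `moved` that best matches `ref`
    by minimizing the mean absolute pixel difference over the
    overlapping region of the two images."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    ym, xm = y + dy, x + dx
                    if 0 <= ym < h and 0 <= xm < w:
                        err += abs(ref[y][x] - moved[ym][xm])
                        n += 1
            score = err / n
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best[1], best[2]

# Tiny demo: a gradient test pattern, copied into a second frame
# shifted right 2 px and down 1 px (zeros where nothing is known).
ref = [[y * 8 + x for x in range(8)] for y in range(8)]
moved = [[ref[y - 1][x - 2] if y >= 1 and x >= 2 else 0
          for x in range(8)] for y in range(8)]
print(best_offset(ref, moved))  # (2, 1)
```

The brute-force search is fine for a toy example; real tools use smarter matching, and aligning your stack would still need the Z and perspective adjustments you describe on top of this.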
David, one thing I would like to hear more on is what value the 3d camera (vs animating a null object's position) provides for a dolly push through a stack of photos. I HAVE used the camera where, for example, I want to take advantage of DOF properties (e.g., rack focus). Do you think it could somehow simplify things for this dolly situation?
I appreciate your insights.
There's a flaw in your theory. Perspective is tied to camera position, not lens choice or angle of view. If you change the camera position from the end zone to the 50 yard line, even if the camera was mounted on a dolly track, the perspective would change and your shots wouldn't seamlessly line up. Moving that far, they wouldn't even be close.
The reason that the ultimate zoom from space shots work is that there's very little perspective to deal with.
If I was faced with your project I'd get or rent a VR camera rig so that you could move the camera around the optical center of your lens. Then I'd shoot a bunch of stills from the same location with a lens long enough (telephoto) to get the end shot, and stitch them all together in Photoshop.

Once you've assembled the huge image in Photoshop it's a pretty simple matter to make several versions where you reduce the canvas size of each copy by say 50% until you're down to about twice your comp dimensions. Then you simply resize all images to twice the comp size, line them up equally spaced in front of the camera, and push the camera into the images.

The distance between images should be exactly 1/2 the zoom value of the camera. IOW, if the zoom value was set to 2000, then your images should be 1000 pixels apart. You then switch images when the distance from the camera to the image is exactly 1/2 the zoom value. This will provide a smooth ultimate zoom.

This will not simulate a dolly or trucking shot. For that you need a lot of images spaced only a small percentage of the entire distance covered.
I hope you followed that. I've been thinking of doing a tutorial on this procedure for quite a while.
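The spacing rule above can be sketched numerically. This is a minimal illustration assuming AE's pinhole camera model, where a layer at distance d from the camera renders at scale zoom / d; the function names are my own, not AE scripting calls:

```python
def apparent_scale(zoom, distance):
    """Scale at which a pinhole camera with the given zoom (in pixels)
    renders a layer `distance` pixels away."""
    return zoom / distance

def stack_positions(zoom, num_images, first_z=None):
    """Z positions for a stack of images spaced zoom/2 apart.
    `first_z` defaults to `zoom`, so the first image starts at 1x scale."""
    if first_z is None:
        first_z = zoom
    spacing = zoom / 2
    return [first_z + i * spacing for i in range(num_images)]

zoom = 2000
print(stack_positions(zoom, 4))        # [2000.0, 3000.0, 4000.0, 5000.0]
# Switch images when the camera is zoom/2 away: the outgoing image is
# then drawn at 2x, which is why each copy is prepared at twice comp size.
print(apparent_scale(zoom, zoom / 2))  # 2.0
```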
First, thanks for explaining this; your explanation makes perfect sense. I do have one question: You said if the zoom value was set to 2000, then your images should be 1000 pixels apart. I assume from this that the AE position units are pixels, right? IOW, if picture one is at Z=1000, picture two is at Z=2000. Is this correct?
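On the units question: AE camera zoom and layer Z positions are indeed both measured in pixels, so with a zoom of 2000 the images sit 1000 px apart. A quick arithmetic check (plain Python, not AE code) of why the handoff at half the zoom value is seamless:

```python
zoom = 2000
z_image1, z_image2 = 2000, 3000     # 1000 px apart, per the rule above

cam_z = z_image1 - zoom / 2         # camera has pushed to 1000 px from image 1
scale1 = zoom / (z_image1 - cam_z)  # 2000 / 1000 -> image 1 drawn at 2x
scale2 = zoom / (z_image2 - cam_z)  # 2000 / 2000 -> image 2 drawn at 1x
# Image 2's canvas was cropped to 50% of image 1's before both were
# resized to the same pixel dimensions, so image 1 at 2x shows the same
# framing as image 2 at 1x, and the switch is invisible.
print(scale1, scale2)  # 2.0 1.0
```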
I have PTGui for stitching panoramas and have found it to work well even without a VR camera rig. I'll give it a try with your technique.
Wouldn't it be cool if there were an AE plugin that would do auto align and perspective correction across a stack of images - kinda like a 3d version of PTGui? Or a 3d version of CS3's Auto Align?
Again, thanks for your great insights.
Yupp, some of those features would be useful on occasion, but since AE's 3D does not work on per-pixel evaluations but rather on a custom sorting algorithm that sees the layer as a singular entity, this is not going to happen soon. You could create some functionality with expressions and scripts, but because of the way it is rendered, there will always be issues near the seams or in areas with different Alpha opacity.
Thanks to all who are educating me on this.
Rick, after thinking about your suggestion - what value does it add to break the large "stitched" image into a series of crops? Why not just zoom into the single large image? Is it because AE wouldn't be able to handle the large file size of a single stitched image?
I recently did a project where the original image, a panorama of a construction site, was 400 MB or about 25000 X 6000 pixels... It took me about 10 minutes to cut this image into sections in Photoshop, resize the pieces, do the appropriate treatment to prepare for video, and dump 6 smaller psd files into the comp. An image this size would really be difficult to manipulate in AE, and the wide shot (most of the original image) wouldn't look as good as it did after resizing and sharpening in Photoshop.
Does that cover it? Resizing a series of crops is faster to work with and can be vastly faster to render.