If the angle of view is the same, there is nothing you can do but accept the consequences of failing to plan this out carefully before you started shooting. Regardless of the number of pixels available within the still image, you cannot create background that isn't there. However, you CAN bring a still image from the video into PS and assemble it into the photo to extend the background. It will always look like exactly what it is: simulated. The color spaces and scaling operations are totally different. It's like combining Kodachrome 25 with Ektachrome 400 pushed to 1600; they're both transparency images, but that's about all you can say.
There are also numerous tutorials for AE that offer advice and techniques for background extension. These assume some pre-planning.
Thanks for the reply. The angle of view is exactly the same for the still and the video. The camera was on a tripod. None of the settings were changed. I have no need of background extension.
The sensor resolution is 5184x3456 pixels. When shooting stills, all of those pixels are recorded. When shooting video, not all of those pixels are recorded. For video, the camera crops and resizes the image to 1920x1080 pixels. What I need to do is replicate the camera's actions. I need to take the 5184x3456 still and downsize it to 1920x1080, cropping and resizing as necessary. And I need to do it in exactly the same way that the camera does. That's the tricky part. It is not at all clear how the camera achieves that task. If you are able to enlighten me, or to give me a link that describes the process, I would be extremely grateful.
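Nobody has published the 60D's exact pipeline, but a starting point is the simplest plausible model: a vertically centered 16:9 crop at full sensor width, then a single downscale. Here's a minimal sketch in Python with Pillow; the crop position and the resampling filter are assumptions, not the camera's documented behavior:

```python
# Hypothetical approximation of the in-camera still-to-video conversion:
# center-crop the 3:2 still to 16:9, then scale to 1920x1080. The real
# pipeline is unknown, so treat this only as a baseline for alignment.
from PIL import Image

def still_to_video_frame(still, out_w=1920, out_h=1080):
    w, h = still.size                  # e.g. 5184 x 3456
    crop_h = round(w * out_h / out_w)  # 16:9 height at full width: 2916
    top = (h - crop_h) // 2            # ASSUMPTION: vertically centered crop
    cropped = still.crop((0, top, w, top + crop_h))
    return cropped.resize((out_w, out_h), Image.LANCZOS)

frame = still_to_video_frame(Image.new("RGB", (5184, 3456)))
print(frame.size)  # (1920, 1080)
```

If the camera actually crops off-center or skips sensor lines first, a Difference-mode comparison against a real video frame will show where this model drifts.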
Place your video clip on the Make Comp icon at the bottom of the project window, and change the comp's duration to the amount you want.
Add the still to the comp on the layer above and reduce its opacity to... oh, say, 50%.
Play with this layer's Scale & Position properties until the two shots match. There's nothing automatic about this.
Change the upper layer's opacity back to 100% and move its in point as desired.
That's the method I'm using, or a variant of it. I'm trying to improve on that method to find a more precise one.
For every single frame, the camera takes a 5184x3456 image, feeds it into an algorithm, and a 1920x1088 frame pops out of the other end. I'm trying to find a way to replicate that algorithm. I feel I ought to be able to do it in Photoshop by cropping and resizing. Once done, I can create a Photoshop action to reproduce it at will.
It would be good to know why this somewhat-strange practice is so important. It would be good to know what you're trying to accomplish. We may be able to offer better alternatives.
It isn't necessary for stop-frame animation. The most important things in stop-frame animation are planning, math and frame rate.
If you're trying to get a DSLR to behave like a 4K RED camera on the cheap, I can only say "good luck".
The camera is at one end of a room. At the other end of the room is a window. When exposing for the room, the window is overexposed. When exposing for the window, the room is underexposed. The video is exposed for the room, the still is exposed for the window. I'm using Mocha to track the shot during a small pan. The still was shot before the pan started.
The final shot has the video of the window on the top layer, and the still on the layer below. The Mocha mask cuts into the video layer, hiding the overexposed window in the video and revealing the correctly exposed window in the still. The Mocha track moves the still, keeping it in sync with the mask. For this to work correctly (or at least for this to work at its best, without hacking it around) the still and the video need to line up accurately.
Oh, now I understand -- thanks!
This isn't an After Effects issue, it's a photography issue: gel the window before you shoot.
They make great, big gels that also do daylight-tungsten correction. Combined with similarly-sized neutral-density gels, you can make the window look just the way it should.
If you happen to respond, "Well, we've already shot this and we can't shoot over again," I'm afraid you're out of luck.
You can certainly use Mocha to track the still of the properly-exposed window into the shot: you don't need Photoshop for this. However, there are bound to be reflective surfaces in this room that won't look natural once you motion track the replacement window into place. And God help you if anyone in the room passed in front of the window in question: your woes are compounded by the phenomenon of light wrap. You'll never get those people to look right, even with painstaking rotoscoping.
I have a large roll of ND gel. That was one option. That stuff isn't cheap; I would have had to cut it; and it would have been time-consuming. What I've been doing since this shoot is just shooting a second or two of video exposed for the window. Then, in post, taking a snapshot from that and using it in place of the still. The technique works very well.
I still have the files from a couple of jobs where I shot raw stills instead of video. I'd like to be able to get them fixed, hence this thread. Also, I would be happier shooting raw stills than grabbing video frames. And it seems to me that if I can just find someone who can tell me how to crop/resize a raw still to end up with a 1920x1080/8 frame, in the same way the camera does it, then I'll have a fix that I can use for every job from here on in.
> I have a still taken with a canon 60d. I also have movie footage, taken from exactly the same position. <
Oh, you mean you shot video with the same camera. I thought you were using a real video camera. Attempting to do HDR with a combination of video and stills is an interesting idea, but it would have been easier if you had lit the scene properly for video.
Hope LaRonde can figure it out for you.
> how to crop/resize a raw still to end up with a 1920x1080/8 frame, in the same way the camera does it, then I'll have a fix that I can use for every job from here on in. <
That is impossible from a technical point of view, plain and simple. Your camera doesn't crop when it shoots video; it simply doesn't use the entire sensor area in the first place, and then applies its chroma subsampling and spatial interpolation algorithms to produce the video image. In practice it may read only every second or third row and column of the sensor, switching the rest into a redundant mode to gather more light or turning them off entirely, and it leaves a fairly large margin near the edges of the sensor for electronic stabilization and to avoid edge noise from random electrical discharges. It's a simple matter of the limited processor power available in the camera: it couldn't record full size and then crop, no matter how much anyone wanted it to. Therefore there is no way to line everything up 100%, since a different crop area/filmback means a slight discrepancy in the resulting projection that cannot be eliminated without further algorithms. So no matter what you try, there will always be some odd areas. Things like lens-distortion removal in Photoshop may help, though.
But surely it must be using the same subset of pixels from frame to frame. Whatever algorithm it uses - let's say it skips 100 pixels at the top, then uses every third line, and skips 100 at the bottom - it must use that same algorithm for every frame. So if I could find out what that algorithm is, I could reproduce it in Photoshop. In the above example, I would just crop 100 pixels from the top and 100 pixels from the bottom. I'm sure I must be missing something here, but I can't see what.
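That intuition can be sketched in code: a fixed margin-plus-skip pattern is trivially repeatable. The numbers below are the hypothetical ones from the example above (100-pixel margins, every third line), not the 60D's real readout pattern:

```python
import numpy as np

# Toy model of a fixed sensor readout pattern: crop a margin, then keep
# every third row and column. The 60D's actual pattern is undocumented;
# this only shows that such a pattern, if known, would be reproducible.
def skip_readout(sensor, margin=100, step=3):
    cropped = sensor[margin:sensor.shape[0] - margin]  # drop top/bottom rows
    return cropped[::step, ::step]                     # line/column skipping

sensor = np.zeros((3456, 5184, 3), dtype=np.uint8)     # full-sensor frame
frame = skip_readout(sensor)
print(frame.shape)  # (1086, 1728, 3)
```

Run on every frame, this always selects the identical pixels, which is the repeatability being argued for; the open question is whether the camera's pattern really is fixed.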
It's not that simple. Even this pattern may shift around, as the camera may use an odd/even alternating mode to optimize processing speeds further and minimize rolling shutter by doubling the temporal precision. If there were a surefire way to tell you that the images align at certain coordinates when scaled by a certain factor, I'm sure everybody would already be doing it, but since things aren't that way, you will just have to figure this out yourself. Looking up the specs of the camera and determining the actual crops used on the sensor could help you work out the basic math; the rest is really just experimentation and comparing results with the Difference blending mode...
Yep, I'll dig into it a bit further and keep looking. I'm convinced that it must use the same set of pixels for every frame; otherwise the image would be jumping around all over the place. That being the case, there must be a repeatable method of going from the raw image size to the video image size. (I think!)
> I'm convinced that it must use the same set of pixels for every frame, otherwise the image would be jumping around all over the place. <
You may be correct, but there's yet another consideration: the video's codec. It's H.264, which uses 4:2:0 chroma subsampling, i.e. reduced color resolution.
Think of the video image as having two layers -- luminance and color -- that combine to create the total image. Your camera records each and every luminance pixel. However, it records only one color sample for each 2x2 block of pixels, a bandwidth-saving trick that is inherent in the codec and can NOT be turned off. When the image is played back, each color sample is blown up to cover four pixels. The result is good enough to fool the human eye, but not a computer.
When your camera's in the still mode, it faithfully records each and every pixel available to the sensor in the highest quality possible: a far different result.
Thus, the resolution of the colors will never match from video to still.
I fear you're tilting at windmills here. Time to haul out the gels for the windows.