I'm not sure if this is the right forum, but I'm hoping someone here might be able to help.
I have a still taken with a Canon 60D, and movie footage shot from exactly the same position. I want to match up the still pixel-for-pixel with the footage. The difficulty is that the still is 5184x3456 (3:2) while the movie footage is 1920x1080 (16:9).
What I want to do is take the still into Photoshop and crop/resize it so that it is 1920x1080 and matches the movie footage pixel-for-pixel.
Looking at a screengrab from the footage, and comparing it to the still, it's clear that the still is cropped, but I haven't been able to work out exactly what sequence of actions will match the still to the screengrab.
A further complication is that the 60D actually records at 1920x1088, not 1920x1080, although After Effects seems to ignore this.
Any help would be much appreciated.
'the right forum' depends on which software you're using - this is the Premiere Pro section. Let us know and we'll move the message to the right place for you.
If you're cropping in PS, the easiest way to match the two images (the still and an exported frame from the video) is to open the exported frame and add the still as a new layer, then move and scale it to match (turn down the opacity on the still layer so you can see what you're doing, then reset the opacity when done). Then save the file as a JPEG or TIFF, which will crop everything to the document size - as your base layer was the exported frame, the resulting file will be the same dimensions as the video. This of course assumes that the still and video were shot with the same camera and lens settings, so that everything you can see in the video is also present in the still.
Premiere Pro, After Effects etc. can crop and zoom a still image too, but it's easier in PS if all you want is a static image (i.e. the zoom isn't animated).
You will find it tricky to get a perfect 1:1 pixel match, as the encoding for video used on DSLRs doesn't read the same pixels as a still shot does. There's interpolation and sampling going on, so you can get the two layers to be visually the same, but not mathematically.
Canon video at 1088 is always a bit confusing, it's down to the way the sensor polling takes place. Just place the footage into a 1920x1080 timeline at 100% and it'll crop off the left and/or right edges - do not use the 'scale to frame size' feature in Prem Pro or you'll force a resampling of every pixel, and lose a lot of quality.
Thanks for the reply. I am using Premiere Pro, also After Effects, and Mocha.
I have looked at making a manual/visual crop, as you suggested, but I'm really looking for a more accurate, mathematical method.
The still and the footage were shot with the same camera, lens, and settings, and on a tripod. The problem is that the still has a different aspect ratio from the video. Everything in the video is in the still, but not everything in the still is in the video - there is a strip cropped off the top of the still, and another off the bottom.
What I really need to understand is how the camera takes the pixels from the sensor at 5184 x 3456 and downsizes them to 1920 x 1080. If I can understand that, then I can replicate the same process in Photoshop.
I assume it must be doing a crop and a resize, but I haven't been able to duplicate the process.
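For what it's worth, if the camera is doing a centred, full-width 16:9 crop followed by a uniform downscale - and that's only an assumption, since Canon doesn't document the 60D's video readout - the arithmetic works out like this (Python, just for the numbers):

```python
# Assumption: centred, full-width 16:9 crop of the 3:2 sensor,
# then a uniform scale down to the video frame. The real 60D
# pipeline (line skipping/binning) may differ - treat these
# numbers as a starting point, not ground truth.

STILL_W, STILL_H = 5184, 3456   # Canon 60D still resolution
VIDEO_W, VIDEO_H = 1920, 1080   # target video frame

# Height of a 16:9 region that uses the full sensor width
crop_h = STILL_W * VIDEO_H // VIDEO_W       # 5184 * 1080 / 1920 = 2916

# Equal strips trimmed from top and bottom for a centred crop
strip = (STILL_H - crop_h) // 2             # (3456 - 2916) / 2 = 270

# Uniform scale factor from the cropped region to the video frame
scale = VIDEO_W / STILL_W                   # 1920 / 5184 = 10/27

print(f"crop to {STILL_W}x{crop_h}, trimming {strip}px top and bottom")
print(f"then resize by {scale:.4f} to {VIDEO_W}x{round(crop_h * scale)}")
```

So under that assumption you'd crop the still to 5184x2916 (removing 270px from top and bottom) and then resize to 1920x1080 in Photoshop. If the measured crop strips don't come out near 270px, the camera isn't doing a simple centred crop.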
I've tried using the info panel to measure the size of the top/bottom crops, but the figures seem fairly random. If it were cropping to leave the height at an exact multiple of 1080 or 1088, I would understand, but it doesn't seem to be doing that.
I want to match up the still pixel-for-pixel with the footage.
That may not be possible, simply because the camera does not record the images that way. The full raster capture, used for photographs, is scaled for video. Pixels captured with the sensor are thrown out to 'dumb down' the full resolution image for the 1920 x 1080 needed for video.
I realise that the raw image will be sharper, but that's something I can deal with using Photoshop, or even within Premiere/After Effects. The key thing is to get the two images lined up.
I agree that pixels are thrown away, but the camera is going to be doing that in the most efficient way. It won't be interpolating unless it has to. It throws away pixels along the top and bottom, the rest of the pixels all seem to be present. It might be sampling every other line, or something like that, but I feel there must be a way to work out which pixels it is keeping from the raw image. Really, all I need to do is find out the x,y co-ordinates of any two opposite corners.
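Assuming a centred, full-width 16:9 crop - and again, that's just a guess at what the camera does, since line skipping in the sensor readout could shift things - the two opposite corners would land at these coordinates in the still:

```python
# Hypothetical: where the video frame's corners map into the
# 5184x3456 still, IF the camera takes a centred, full-width
# 16:9 crop (an assumption, not confirmed for the 60D).
STILL_W, STILL_H = 5184, 3456

crop_h = STILL_W * 9 // 16            # 2916 rows kept
top = (STILL_H - crop_h) // 2         # 270 rows trimmed off the top

top_left = (0, top)                   # (0, 270)
bottom_right = (STILL_W, top + crop_h)  # (5184, 3186)
print(top_left, bottom_right)
```

One way to check the guess: take a pair of features visible in both images, measure their pixel separation in each, and see whether the ratio comes out at 5184/1920 = 2.7. If it does, the crop really is full-width and only the vertical offset is in question.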