Looks okay within the limits of what AE's bilinear filtering can do. The only thing I could think of is that your keying is softening the stuff in addition, but that's hard to tell from a JPEG. Do you key before scaling or the other way around?
Hi Mylenium, thanks for the quick answer.
The results didn't seem too bad until I compared them to the SD background footage, which has detail in the faces.
The key is done (in a precomp) before the scaling. I thought it best to key it full size. Thoughts?
I had a similar thought about the key softening the image, so I tried turning it off (and all the other effects) and just let AE do its scale and move. There was no discernible difference (except of course the chromakey screen was still visible).
I am now experimenting with BCC Uprez to see if it can produce better results. I will post my findings.
Mmh, do you by any chance position the items so they fall victim to additional sub-pixel sampling, i.e. their position values are not in full pixel increments? That could be one thing. Another may be that you are using OpenGL in your main comp, but not in the pre-comp. The "texture" might simply get softened because your graphics card does not fully support OpenGL 2.0. I would also definitely try to pre-render the clip, then re-import it, replacing the source comp. Something is definitely up, but it's certainly a minor thing. Also, of course, consider motion blur, frame blending or general mismatches in frame rates between comps...
I have a few suggestions for you that will greatly improve the look of HD footage scaled to SD. This works for any image you reduce in size.
- Separate fields for interlaced footage, making sure that Preserve Edge Quality is enabled. If your footage has some kind of pulldown scheme (i.e. 24p video) then make sure that you properly separate the fields so that you are working with full frames.
- Pull your key in native resolution. The easiest way to do this is to just drag the footage you want to key into the new composition window. Make sure that you are looking at the comp with pixel aspect ratio correction turned off so that you can see the actual pixels you are working with.
- Do everything you can in native HD. Pull the key, drop in the background, animate, color correct. If your background plate is SD then at least pull the key in HD.
- If working in native resolution with pixel aspect ratio correction turned off for the comp window is driving you nuts, then nest your keyed footage in an HD square pixel comp to complete your animation.
- If your workflow requires you to pull the key first and then work in SD, the best practice is NOT to work in a square pixel SD comp unless the majority of the elements in the project were originally square pixel sources.
- When you move your HD footage to the SD comp, make sure that you do not use the Fit to Comp command. Use the Fit Horizontally command instead, to preserve the pixel aspect ratio. If your X and Y scale values are not identical, you will degrade the image and even distort it a bit. If the HD footage is scaled to fit the design instead of fill the frame, the same rule applies: X and Y scale values must be identical.
- Last but not least, apply Unsharp Mask, or at least Sharpen, to your HD footage and adjust for maximum clarity with minimum noise. You'd never, or at least should never, reduce the size of a still image in Photoshop without sharpening it. The same goes for video: if you scale it down, you should sharpen it.
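The downscale-then-sharpen principle in that last point can be sketched outside AE. This is a minimal Python/NumPy illustration of the unsharp mask idea (sharpened = original + amount * (original - blurred)), not AE's actual implementation; the 3x3 box blur and the `amount` value are simplifying assumptions standing in for the effect's Radius and Amount controls:

```python
import numpy as np

def box_blur(img):
    """Simple 3x3 box blur with edge-replicate padding (stand-in for a Gaussian)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def unsharp_mask(img, amount=0.5):
    """sharpened = original + amount * (original - blurred), clipped to 8-bit range."""
    img = np.asarray(img, dtype=float)
    return np.clip(img + amount * (img - box_blur(img)), 0, 255)
```

Flat areas pass through unchanged, while pixels on either side of an edge overshoot and undershoot slightly, which the eye reads as restored detail after a bilinear downscale.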
I took a look at your original. It isn't very sharp to start with (meaning the detail level on the camera was set correctly for shooting greenscreen), so it is reasonable to expect that it will require some sharpening when added to any composition. Way back in the day, when I worked with studio cameras and a switcher, we always matched the detail level of the cameras as well as the color. There's no reason you shouldn't include matching detail levels in your compositions as well, and the Unsharp Mask effect is an adequate tool for the job.
Hi Rick & Mylenium, thanks for your suggestions.
My workflow was pretty much as you suggested, Rick, except for the last two items. My scale was not uniform; I fixed this, but without any improvement. I then tried Unsharp Mask, which did bring back some detail in the final frame, but it introduced some more flickering in the preceding motion. (The frame I have shown is the end of a walk sequence, and the keyed footage had to be scaled and positioned over time to match the live-action background.)
I will try Unsharp Mask again on the next (similar) job and animate its settings to minimize the flickering.
I also tried using an expression to round the position to whole numbers on each frame, to avoid sub-pixel positioning, but this just made the motion stutter. I am not using OpenGL either.
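The stutter is inherent to the rounding, not to the expression itself. A minimal Python sketch (the 0.4 px/frame rate is just an illustrative assumption) shows how a smooth sub-pixel move turns into uneven whole-pixel jumps:

```python
# A layer moving smoothly at a sub-pixel rate vs. the same motion
# rounded to whole pixels on every frame.
positions = [frame * 0.4 for frame in range(10)]
rounded = [round(p) for p in positions]

# Per-frame jumps after rounding: a mix of 0 px and 1 px steps,
# which the eye reads as stutter rather than smooth motion.
steps = [b - a for a, b in zip(rounded, rounded[1:])]
print(steps)
```

Sub-pixel positioning trades this stutter for softness: the motion stays even, but every frame is resampled between pixels.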
Re: the original shot being soft, I think that must be the JPEG compression, because in Final Cut on an HD monitor the raw footage is quite sharp.
I have always had the usual, expected problems scaling source material larger than its original size, but I have never experienced this amount of image degradation when scaling footage down before. (I have used AE since it was CoSA.)
Just a thought: don't 3D programs render larger frames and then scale them down (oversampling) to achieve more detail in finished frames, or am I confusing techniques?
Has AE changed its scaling algorithm at all?
Thanks very much for your help.
Has AE changed its scaling algorithm at all?
Nope, AE is bilinear all the way, even today. That's not per se a bad thing, as it avoids certain problems with temporal stability and unfavorable sub-pixel patterns that some operations can produce; it just isn't really able to deal with the amount of detail HD footage has. For the time being, using third-party resizing plug-ins is the only way to circumvent that limitation.

As for 3D programs - they usually don't render anything larger than necessary. Oversampling/multisampling has a different meaning there. For antialiasing, you would not render larger, but distribute the rays of the renderer around a given screen coordinate or intersection point with a 3D surface. You would do this with a fixed pattern (similar to 2D blur and sharpening kernels, for instance a 3x3 grid), randomly, or with a combination of different methods (semi-ordered). After the different sampling operations, you would interpolate the results to give the final value of the point, and since that result becomes part of the calculations for the neighboring points, you get smooth edges and shading and all the other stuff we so love...
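A minimal sketch of the 3x3 ordered-grid case described above, assuming a hypothetical `sample` function that tests whether a ray hits a unit circle (1.0 inside, 0.0 outside) - a stand-in for whatever the renderer would actually evaluate:

```python
# 3x3 ordered-grid supersampling for a single pixel.
def sample(x, y):
    # Hypothetical scene test: a unit circle around the origin.
    return 1.0 if x * x + y * y < 1.0 else 0.0

def supersample(px, py, n=3):
    """Average n*n samples distributed on an even grid within the pixel."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Offsets place the samples evenly inside the pixel,
            # centered on (px, py).
            total += sample(px + (i + 0.5) / n - 0.5,
                            py + (j + 0.5) / n - 0.5)
    return total / (n * n)

# A pixel straddling the circle's edge gets an intermediate coverage value,
# which is what produces smooth antialiased edges after interpolation.
print(supersample(0.9, 0.0))
```

Pixels fully inside or outside the shape come out 1.0 or 0.0; only edge pixels get fractional values, so no oversized frame is ever rendered.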