I'd suggest making sure your composition is the frame rate you want and that AE interprets the clip at the frame rate you want, but feel free to run some tests and let us know how it's working for you.
Image sequences do not have a frame rate embedded in the metadata by most apps, so you must set the interpretation yourself. In AE and Premiere Pro, image sequences are always interpreted at the default frame rate set in the preferences. You can change the interpretation in Premiere Pro by selecting the footage and choosing Modify > Interpret Footage, or in AE by selecting the footage and choosing Interpret Footage.
From your description it sounds like you don't have a solid grasp of frame rates or of working with film transferred to video. I won't go through the whole thing right now, but you generally don't want to use a time warp or time remap factor; you want to properly interpret your footage. If you are actually working with a frame-for-frame scan of Super 8 shot at 18 fps, then it would be best to interpret the footage at 18 fps and put it in a comp set to the target frame rate you've chosen for your delivery.

Personally, I almost never use 23.976 fps, because the judder and the stroboscopic problems you run into at lower frame rates make life a lot more difficult. I would use 29.97 for everything unless I was going to PAL. The old notion that you can't make video look like film unless you run at 24 (23.976) is not true. I shot national commercials on film for many years, and I ran the film camera at 29.97 fps to avoid dealing with 3:2 pulldown. They all looked like they were shot on film because they were. In fact, about half of my work for major agencies came because my spots looked better than those of most other folks producing commercials on film.
If you are mixing the Super 8 footage with video, I would use 29.97 for the comp rate and just drop the image sequence in the timeline. If you are only using the Super 8 footage, I would probably do the same thing, or I would drop the footage in a 36 fps (2×) sequence or comp, add motion graphics, then render at 29.97 and let the encoding take care of the frame blending where needed for the duplicate frames.
Thanks for the responses. I'm on CS6, so I haven't included Optical Flow in the discussion below.
Rick, you're right about my grasp of frame rates, and you could extend that to AE. In fact, I find AE intimidating. I spent years inside InDesign with no problems; Premiere is a joy to use; but AE… its complexity is almost beyond me.
Anyway, I'm playing around with scanning Super 8mm myself, with the end result being part of a 23.976 Blu-ray. The tiffs are taken straight from the 8mm film, so their real frame rate is 18 fps.
I have at least four choices:
- I could interpret at 23.976 inside Premiere (no need for AE), with the result being sped-up video. That's not undesirable, and I may decide to go this way. You see it in some documentaries, whether by choice or because the producer didn't bother correcting the speed, I'm not sure. The effect gives the viewer the feeling of: "Look at the strange motion. Obviously an old clip." So it can be valid, just like sepia toning.
- I could drop the frames, interpreted at 18fps, straight into a 23.976 Premiere timeline (again, no need for AE), and let Premiere duplicate frames. The result would be jerky video.
- As per (2), but with Frame Blending turned on. A smoother result than (2) – but I assumed that, generally, Frame Blending would be inferior to Time Remapping or Time Warping.
- For the smoothest motion, jump into AE (any frame rate), and Time Warp by 75%. AE interpolates extra frames. Doesn't matter what AE thinks the frame rate is, because I set that later on in Premiere. I render as a Tiff series, import into Premiere, interpret the Tiffs at 23.976 fps for insertion into a 23.976 timeline.
Is what I said in (4) the best approach? Assume, for the moment, that I want the most natural-looking, smoothest video.
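To keep the numbers straight while weighing these options, here's a quick back-of-the-envelope check in plain Python. Nothing AE-specific here; the rates are just the ones discussed above.

```python
# Arithmetic behind the options: 18 fps Super 8 going into a 23.976 timeline.
SOURCE_FPS = 18.0
TARGET_FPS = 24000 / 1001   # NTSC-style 23.976... fps

# Option 1: interpret the frames at 23.976, so playback speeds up.
speedup = TARGET_FPS / SOURCE_FPS
print(f"speed-up factor: {speedup:.4f}")             # ~1.332x, i.e. ~33% fast

# A 60-second reel would then run short:
print(f"60 s of film plays in {60 / speedup:.1f} s")  # ~45.0 s

# Options 2 and 3: keep real time, so Premiere must repeat frames.
print(f"duplicated frames per second: {TARGET_FPS - SOURCE_FPS:.3f}")  # ~5.976
```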
You are making this way too hard.
If your footage was shot at 18 fps, then to play it back in real time you should interpret it as 18 fps. If the footage was 30 fps, then to get real-time playback, interpret it at 30 fps. It doesn't matter what the frame rate of your comp or your sequence is. If you don't want to maintain real-time playback but you want every frame, then interpret your footage at the same frame rate your comp is set to. When the interpretation is the same as the sequence or comp, you will see all the original frames.
#1 is almost right. If you interpret the footage at 23.976 it will play back a little faster, but it won't look jumpy, and for a lot of action you won't be able to tell, because that's not much of a speed change. The flickering, fast look you're talking about was caused by uneven frame rates producing inconsistent exposure with hand-cranked cameras in the early days. Those cameras were cranked at anywhere from 10 or 12 frames per second up to 20. If you tell the app what frame rate the footage was originally shot at, it will play back in real time no matter what the frame rate of the comp or sequence you are using.
#2 is wrong because Premiere is not dropping frames; it is repeating some of them. In most cases you will get the same apparent motion that you would get viewing the footage at 18 fps. It would take a very keen eye and a lot of attention paid to individual frames to detect the duplicated frames. EVERY movie you have ever watched on television has duplicated fields and an odd 3:2 interlace cadence, because when film shot at 24 fps is transferred to video tape, the footage is interlaced and combinations of 3 fields and 2 fields are sent to your eye, so you get duplicated frames. Nobody notices. In PAL countries, US films are either sped up to 25 fps when transferred to tape, or they just live with the occasional duplicate frame.
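The 3:2 field cadence described above can be sketched in a few lines (an illustrative toy, not any app's actual engine): each 24 fps film frame is held for 3 fields, then 2, alternately, so 4 film frames become 10 fields, i.e. 5 interlaced video frames.

```python
# Toy model of 3:2 pulldown: 24 fps film -> 29.97 fps interlaced video.
def pulldown_fields(film_frames):
    """Hold film frames for 3 fields, then 2, alternately."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
print(fields)   # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']

# Pair the fields into video frames: two of the five mix two film frames,
# which is exactly the kind of blended frame nobody notices at speed.
frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(frames)   # [('A','A'), ('A','B'), ('B','C'), ('C','C'), ('D','D')]
```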
#3 is wrong because frame blending only works if you time-remap the footage. The Time Warp and Time Remapping that come with AE will do a fairly good job of blending a new frame from two original frames, but for the most part, with this small a frame-rate change, you won't see much improvement in motion. If you wanted to use Timewarp or Time Remapping, you would have to interpret the footage at a different frame rate than it really is and then change the speed of the footage using the effect to make it play back at the right speed. IOW, you would interpret your footage at 23.976 fps and then use time remapping to extend the clip to its original length and turn on frame blending, or you would interpret the footage as 23.976 fps, import it into the timeline, pre-compose the footage, and add Timewarp to extend the footage so it would play at the right speed. You would be slowing down the footage, so if you didn't pre-compose it you would not get the entire clip.

With this small a time change it's highly unlikely that you would see any difference in the motion when the clips were played back at full speed. The render time would also be much higher. If you had some fast motion that you wanted to fix with Timewarp, you would probably have to turn the vector detail and smoothness way up to get much improvement, and some motion would just look very odd on a frame or two. The oddness depends on the motion in the frame, and once again, most folks are not going to see any difference with this small a time change. And Frame Blending is not necessarily inferior to any other method. It all depends on the motion in the frame. You can't judge any of it by looking at a freeze frame. You have to watch the clip in real time and at full size on the device intended for your most critical audience.
#4 is wrong for several reasons. First, if you Timewarp your speed depends on the frame rate you have set when interpreting the original footage and the frame rate of the comp. 75% would only work for a specific combination of frame rates.
The only software I know of that works in AE and will do a great job of creating new frames when you change the frame rate of footage is Twixtor. It's expensive, you have to make adjustments based on the motion in the frame, sometimes create masks, and it takes a long time to render. If you use Twixtor you have to know exactly what the frame rate of the original footage is and make sure it's interpreted at that frame rate to maintain sync and have it play back in real time. For your project, with 18 fps footage in a 24 (23.976) sequence or comp, it is probably a waste of time.
One last point. Unless you absolutely know and understand what you are doing, the best frame rate for the smoothest end result, with the least chance of motion artifacts, for your footage and any motion graphics you will be adding to the project would be 29.97 fps. Everything you watch on TV is broadcast at 29.97 fps with an interlaced signal, unless you have a cable service that offers HFR playback for sports broadcasts and switches to 23.976 progressive for certain movies and programs from certain producers. Programs that are shot progressive just have pairs of fields that are the same slice of time.

The only way you will ever see a video rendered at a different frame rate is if your display (TV or computer screen) is capable of recognizing the frame rate and changing the refresh rate of the display. Most computer monitors in the US run at 60 Hz. Newer TVs and displays can run up to 120 or even 240 Hz. They start at the top and draw the screen one line at a time, so if a frame changes in the middle of the drawing process, the bottom half of the picture will be a different frame than the top; but you can't see that, because that's not how the human eye and brain work. IOW, don't worry so much about it. When you mix frame rates, the only part that ever needs fixing is the few frames that really stand out as having problems.
If all you are doing is adding 18 fps footage to an edit, then all you have to do to make the footage play back in real time is interpret it as 18 fps. Any attempt to replace the occasional repeated frames with new ones where the pixel motion is interpolated is going to take a bunch of work and a long time to render, and 99% of your audience will not see the difference.
I made this a looooong time ago. Still works well! And it's ol' CS3! It tricks Timewarp into turning precomped 18 fps into 23.976 with a 50% timewarp, then a 50% timestretch, then places those magic frames back into a precomped 23.976 comp. Took me a while to figure out, so you don't have to!
ae cs3 aep
Rick: I'll study your comments in depth today and get back to you. You've almost convinced me that Frame Blending is all I need. But I still want to play around with other methods.
I looked up Time Stretch in the Adobe PDF (Time-stretching and time-remapping in After Effects ). No wonder I have difficulty understanding this stuff. Here's what it says:
For best results when time-remapping a layer, use the Timewarp effect.
So… when trying to explain Time Stretch, the author invokes Time Remapping, but says Timewarp is best. My goodness! I'll start a new thread about this, so no need to comment here.
This thread has strayed a bit from my original question, the answer to which appears to be:
For a tiff sequence, if AE is asked to interpolate extra frames by Timewarping, and the new sequence is then rendered out to tiff, then it doesn't matter what frame rate is used – as long as the interpreted frame rate and composition frame rate are the same. All that's happening is that extra tiffs are being generated to suit. So if I want 6 extra tiffs for every 18 (to convert 18 fps footage into 24 fps), I simply set Timewarp to 75%.
Whether the render time or quality or effort required is appropriate, well, that's another matter.
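That 75% figure is easy to verify with a little arithmetic; note it is exact only for a true 24 fps target, and fractionally off against 23.976 (a toy check, nothing AE-specific):

```python
# Timewarp speed needed so 18 source frames span one second of 24 fps output.
SOURCE_FPS = 18
TARGET_FPS = 24

speed = SOURCE_FPS / TARGET_FPS     # 0.75 -> set Timewarp to 75%
frames_out = SOURCE_FPS / speed     # 24 frames: 6 interpolated per 18 originals
print(speed, frames_out)            # 0.75 24.0

# Against an exact 23.976 timeline the "right" speed is fractionally higher:
print(f"{18 / (24000 / 1001):.5f}")  # 0.75075
```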
Chris: I downloaded your file, studied it for quite a while, but I don't understand how your method works. For simplicity, let's talk one second of 18 fps footage = 18 frames, and a final 24.000 fps (call it 24) composition. Correct me if I'm wrong:
- 50% Timewarp in an 18fps composition = 36 frames (18 added by interpolation) running for 2 seconds.
- 50% Time Stretch in the same composition = 18 frames running for 1 second. 18 have been deleted.
- Drag those 18 frames in (2) into a 24 fps composition, and AE will add an extra 6 frames by duplication.
I don't understand the technique (don't the two 50 percents cancel?) or the benefit. I hope you can explain.
Haha, yes, I thought you might reply. OK, so this is what it does...
1. I precomp so that timewarp doesn't cut off any footage. This creates a layered buffer of empty layer data.
2. I timewarp the precomp 50%; this creates extra interpolated frames but also has the negative effect of slowing down the footage.
3. I then timestretch 50%, which speeds it back up. (This is the equivalent of whole frames in Timewarp.) It just runs faster this way.
As to the very strange issue, which you have seen: why in the heck am I not just using 75% timewarp and 75% timestretch? You actually could, because they "cancel" each other out.
and here's the kicker why I use 50%:
ok this is kinda tricky to explain, so here it goes...
It's easier to timewarp from a high fps to a low fps because you're just throwing away extra data. This is why high-fps footage still looks good when you slow it down: the movement of pixels has motion data recorded in a very accurate manner.
But what happens when you go from a low fps to a high fps? Well, optical interpolation has to work twice or four times as hard to convert 18 fps to 23.976 because there's no motion data. It has to "stretch" those large gaps of data into smoothness.
Hence 75% timewarp and 75% timestretch COULD work; it would just be a teeny bit jerky, laggy, juddery (whatever you want to call it). I've found that 50% works well, is fast, and minimizes excessive tearing of the image. You can also experiment with 40%, 30%, 20%, but timewarp will start creating massive artifacts.
If you can get away with 75% and not notice any lag, go for it, chances are, your video will have slightly less tearing, if only fractionally.
But I love smoothness, as that is the whole reason why I do this. Hope this helps. lol, wow, long post for something so simple. Oh well,
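If I've read those steps right, the frame counts through the 50%/50% trick work out like this. This is a toy trace of the bookkeeping only; the actual frame synthesis is Timewarp's.

```python
# One second of 18 fps footage through the 50% timewarp / 50% timestretch trick.
src_frames = 18                   # step 1: one second of Super 8 in a precomp
duration_s = 1.0

# Step 2: Timewarp at 50% -> half speed. Timewarp synthesizes a new frame
# between every pair of originals, doubling the frame count and the duration.
warped_frames = src_frames * 2    # 36 frames
duration_s *= 2                   # now 2 seconds

# Step 3: Time Stretch at 50% -> double speed, restoring the original duration
# while keeping the synthesized frames in the stream.
duration_s *= 0.5                 # back to 1 second, effectively 36 fps material

# Dropped into a 23.976 comp, that one second is sampled at the comp rate:
comp_fps = 24000 / 1001
print(warped_frames, duration_s, round(comp_fps * duration_s))  # 36 1.0 24
```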
chrisw44157881's idea of applying Timewarp and then Time Stretch doesn't accomplish anything and may end up giving you frames that are more of a mess. It reflects a misunderstanding of how those effects work and of the rendering pipeline.
Let me try and explain. I'll use screenshots. Here's a comp with an 18 fps movie in a 23.976 comp. The movie is 2 seconds long and the comp is 2 seconds long. The times match exactly.
The CTI is set to 1 second. My goal is to render extra frames and fill in the missing motion. One way I can do that is to set the interpretation of the footage to match the comp. This makes the footage too short and requires that I do some time stretching, time remapping, or time warping. There's a much easier way to do this that I should have explained the first time. I'll do that now so you don't have to read the rest of the post.
The easy way, and the best way, to get unique frames from footage shot at a different frame rate is to simply interpret the footage as it was shot (in this case 18 fps) and drop it in a comp set to the frame rate you want to use for your project, in this case 23.976. To create the new frames, you can try turning on frame blending first to see what that gives you. Then you can try pixel motion. Then you can turn both of those off and apply Timewarp. The default Timewarp settings for vector detail and smoothing are almost exactly the same as pixel motion, so you'll have to increase those settings if you want better results. All you have to do is set keyframes for Source Frame instead of Speed to get Timewarp to work properly. If you set your timeline to frames instead of timecode, it's easy to get the total frame count of your footage. That's all there is to it.
If your intent is to maintain the original speed of the clip and just generate better extra frames this is all that is necessary.
This shows the clip interpreted at the same frame rate as the comp with time stretch and frame blending:
This is the footage interpreted at 18fps with the CTI on the same frame with frame blending turned on. It's exactly the same.
It's time to try pixel motion. This gives me a better interpretation of the new frames I want but it also gives some weird jaggy lines to the moving star:
There is no adjustment for pixel motion. It does a fine job on some kinds of motion but in this case it's not working well.
chrisw44157881's idea of using Timewarp to create a bunch of new frames and then stretching with Time Stretch will not give you unique frames without frame blending or pixel motion turned on; it will give you the ghosted images you would get with just frame blending turned on, and it would introduce the vector artifacts that you get with pixel motion in this scenario. The end result is zero gain. You're better off, and will get faster render times, if you just use Timewarp and crank up the vector detail and smoothness as required to give you the best unique frames. As soon as you add Time Stretch or Time Remapping and turn on pixel motion, you basically revert to the same thing you get with the default Timewarp settings, which will only work well on certain types of motion in the frame.
So now here comes the really mind blowing part. If you work in Premiere Pro, interpret the footage as it is shot, drop the footage in a 23.976 frame sequence and select Speed/Duration Time Interpretation to Optical Flow or Frame Blending you will have new unique frames exactly the same as you get in AE. It will take about 10 seconds and you'll be ready to edit or render your project. The only reason I would ever process a clip like the one you are working with in AE using Timewarp is if there was some motion that just didn't look right any other way when viewed full screen at speed.
So in summary: if you want to maintain real time with your footage, make sure the interpretation matches the frame rate it was shot at. If your comp or your sequence is a different frame rate than your footage, then select Frame Blending or Pixel Motion in AE, or Frame Blending or Optical Flow in Premiere Pro via the Speed/Duration setting, and check a test render at full screen and real-time playback. If the motion looks good, then you're done. If there are motion-artifact problems, try Timewarp in AE, or Twixtor if you can afford it, or one of the Boris effects. If none of those work for you, then you'll just have to live with the duplicate frames that 90% of your audience won't ever see.
Well Rick, you've just taken away my fun for the night. I was planning to spend a few hours in Premiere, comparing Frame Blending and Timewarp rendered from AE. But you've done the work for me.
I'm convinced. The less processing the better. So I like the idea of Frame Blending, just adding a blended frame when needed. But I needed to learn about, and be convinced, that the more complicated solutions can cause problems. I saw some serious tearing, if that's the word, when I was experimenting with Timewarp. One tear, and that's it, as far as I'm concerned.
Some more questions
You mentioned Pixel Motion. I'm on CS6. The only place I can find Pixel Motion is inside Timewarp. Do later versions of AE have Pixel Motion as a standalone item? I can find Pixel Motion Blur mentioned as a new feature in CC, but not Pixel Motion.
Instead of comparing Frame Blending and Timewarp, I've been comparing no Frame Blending (A) against Frame Blending (B) in Premiere, with one on top of the other in the timeline. It's not looking good, but I guess there's a good reason behind the behaviour. The numbers represent frames, D means a duplicated frame, and • means an Interpolated frame – but I'd rather call it blurred because that's what it is.
A: 1 2 3 D 4 5 6 D 7 8 9 D
B: • • • 3 • • • 6 • • • 9
I'm not at all happy with the look. The result of Frame blending, with only one original frame in four, is to blur the video. Is that normal for Frame Blending?
Is it possible in AE or Premiere, when inserting an 18 fps tiff sequence into a 23.976 timeline, to obtain this exact cadence with Frame Blending (or by some other method)?
1 2 3 • 4 5 6 • 7 8 9 • ........ and so on.
Technically speaking, shouldn't the frame rate for Super 8 be set to 17.982 when placed in a 23.976 timeline? i.e. a factor of exactly 3/4, so that for a long clip the cadence is always 1 2 3 D.
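The 3/4 ratio in that last question checks out numerically. Here's a small sketch (nearest-lower-frame sampling, which is roughly how conforming works; not any app's actual code) comparing true 18 fps against 23.976 × 3/4 = 17.982 fps in a 23.976 timeline:

```python
from fractions import Fraction

OUT_FPS = Fraction(24000, 1001)          # the 23.976... timeline rate

def duplicate_gaps(src_fps, n_out=400):
    """Spacing between output frames that repeat the previous source frame."""
    shown = [int(Fraction(i) * src_fps / OUT_FPS) for i in range(n_out)]
    dups = [i for i in range(1, n_out) if shown[i] == shown[i - 1]]
    return set(b - a for a, b in zip(dups, dups[1:]))

# At exactly 3/4 of the timeline rate (17.982 fps) the 1-2-3-D cadence is
# perfectly regular: a duplicate every 4th frame, forever.
print(duplicate_gaps(OUT_FPS * Fraction(3, 4)))   # {4}

# At true 18 fps the same cadence holds short-term, but it hiccups
# (one 5-frame gap between duplicates) roughly every 14 seconds.
print(duplicate_gaps(Fraction(18)))               # {4, 5}
```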
Question 1: Pixel motion is a later addition to the frame blending switch. It's basically the default for Timewarp
Question 2: Frame blending does exactly what it says: it blends frames. Depending on the action in the frame, you may get a softer look. There are no adjustments for how that works with time stretch or time remapping. The thing you are seeing is basically the same thing you get when you transfer any movie shot on film to video. It's 3:2 pulldown. Interlacing the rendered output makes it look a little better, but if you were to step through any feature film transferred to video or DVD one frame at a time, you would see blended frames in a uniform cadence. That's how it's done. Don't obsess about it. Most people can't see it when the film is playing back at full speed. Here's the important part. You cannot, let me emphasize that again, you cannot judge the quality of a video by looking at individual frames one at a time. You have to judge the video playing back full screen and full speed on the device your critical audience will be using to view the movie.
Question 3: If you want that cadence, you just turn off frame blending. You will get pairs of duplicate frames. Because you are going from 18 to 23.976 instead of 24, the cadence will occasionally shift by one frame. If full-frame, real-time playback with frame blending on is annoying, then turn it off. If the cadence is annoying, then you have to build new frames and interpolate the pixel motion. Timewarp is fairly good on some shots, but its vector interpretation, where it looks at the movement of every pixel on the screen and tries to predict the path of each pixel, is very limited. It mostly predicts straight lines, so hard edges, like the points of the rotating star, tend to get all fouled up. Once again, you cannot judge the quality by stepping through frames one at a time.
Question 4: You are correct. When a telecine is used to transfer film to video, the film runs at 23.976 fps and is scanned a field at a time: 59.94 fields and 29.97 frames per second. You end up with pairs of duplicate fields in two frames, then a frame composed of one field from one film frame and the second field from another, then more frames made from identical fields. If you look at each frame one at a time, the odd frame made up of fields from two different film frames can look very bad, but almost no one can see it when it is played back at speed.
If you want to try the cadence idea, and you are set on 24 fps for your project, then I would use the standard 24 fps comp or sequence settings in AE or Premiere Pro and not worry at all about 23.976. That frame rate was designed when the camera world was going crazy trying to give us video that looked like film on NTSC systems that would only play back at 29.97 fps. It is now almost completely a non-issue. You can render whatever you like for the web, but most broadcasters and cable companies only send 29.97 fps interlaced content to their customers. All broadcast video is 29.97 interlaced; it only looks progressive if the fields are matching slices of time. You have to have a very new TV and a cable provider that can deliver HFR content to see anything at any other frame rate on TV. Only streaming supports multiple frame rates, and you should follow the standards there. There will be no sync or pitch problems with the tiny speed change from 23.976 to 24 fps, unless you are an audio savant with absolutely perfect pitch whom an F# that is microscopically out of perfect tune will drive crazy.
If you do choose to work and shoot at 24 fps for your projects, then you must pay attention to critical panning speeds, be much more careful about moving the camera, your choice of lenses, and the way you handle your motion graphics, and be prepared to live with the occasional shot that just doesn't look right no matter what you do. 29.97 or 30 will give you much smoother motion and make your life easier. Making the movie look like it was shot on film, which almost nobody does any more, is more a matter of color grading than anything else. I've been making movies for a living since 1969, have shot on every film size from 8mm to IMAX, and 24 fps requires a much higher level of skill to make every shot work than 30 does. Almost every documentary and commercial intended for broadcast that I shot on film from 1973 to 2001 was shot with the camera running at 29.97 fps. They all looked like they were shot on film because film is the only thing an Arriflex could use to make images at that time.
Your way is technically correct, and wow, I think everyone appreciates the time you take to post pictures, and adobe did intend everything to be easy and straightforward. I was also going to post the normal way, but you beat me to it.
Back in CS3, there was no frame blending/pixel motion, just frame mix. And Twixtor got the last laugh because we all had to relent and buy the plugin. I didn't give up and made this, as you see here. It has held up well over the years, being used for all sorts of stuff.
Again, frame blending with Pixel Motion is really great because it's so simple. And now that Premiere has it, well, I don't even really need AE now.
Did you know that you can use mattes with timewarp, just like twixtor? Yes, you can! They are masks to help/guide/isolate movement.
Also, the Foundry, creators of Kronos, let slip a little secret: interlaced footage doesn't work well with pixel motion because, according to them, "you have to do a little jiggery-pokery to get interlaced frames to play right." For example, 29.97i footage has to be dropped into a 59.94 fps comp. This makes every field into its own frame. Now you can truly optical-warp with much higher quality.
And where are the extreme settings in pixel motion? Nope, they only exist in the Timewarp effect. They make footage a lot less blurry.
So, again, my way isn't in the new adobe manual, but you can still find the Foundry's old Kronos manual which backs up what I'm saying.
I don't want to set keyframes for Source Frame, as the limit is 5,000 frames! So again, thanks but NO thanks. That's just time-remapping stuff anyway. Two steps back.
With my way I have full access to masks, filters, and global compensation, none of which can be used with frame blending/pixel motion. The other way is a lot like driving a car without a steering wheel, with a 5,000-frame limit!
And please, next time be considerate and attack the argument, not the person.
Thanks Rick and Chris. You've both offered beneficial input.
I'm probably going to use Frame Blending in Premiere, so I've started a new thread over there: Frame Blending: possible to interpolate only the added frame?