I've only read a few Q&As about this trouble, and none have been posted since 2011, so I'm seeking help.
I'm trying to use Time Warp for slow motion on a clip (taken on a Panasonic GH2, HD 1080 24p, AVCHD, .mts file).
I add a 4 second clip to the layer that's been trimmed from a clip of, say 30 seconds in total. I only want the 4-second clip to be slow motion.
I pre-compose the 4-second clip and extend that comp to 8 seconds. I apply Time Warp, and the clip is slowed down by 50% (the default speed), but the clip remains only 4 seconds long. And, unlike others who have had this problem, there is no way to pull the clip to the right to see the rest of the slowed-down footage.
And here's an even more interesting case I ran into when trying this. After applying Time Warp, the footage jumped back to the beginning of the source clip, not the 4-second section that I had trimmed it to.
I've seen the advice that I should export the files in another format (Animation, etc.) that is more "friendly" to Time Warp. I've done this, but it seems so labor-intensive when dealing with a video project comprised of 50-70 clips, about 1/3 of which I'd like to add slow mo to.
I've also seen advice to enable Time Remapping on the pre-composed layer, but that did nothing for me.
And I've also seen the advice to "loop" the footage as many times as you are adding time (2× for 200%, 3× for 300%).
So, my questions are:
1) Is there new information on using Time Warp? Some button or workflow that accurately makes clips slow motion. It's very frustrating to export clips from Premiere into a new format, then bring that into AE then export that clip and re-import it into Premiere. Ugh!
2) Could this be something to do with timecode? It seems to me that Time Warp simply goes to the start of the clip, rather than keeping the clip trimmed at the points that an editor wants.
3) Changing the layer's speed option: Doesn't seem to produce as smooth and silky slow motion as Time Warp. True? (I think so.)
Thanks for any advice/thoughts that can be shared.
If you don't want to hassle with transcoding AVCHD footage, then don't shoot AVCHD. AVC-Intra would be a good choice.
AE can handle long-GOP footage like AVCHD, but it takes forever to work with when fooling around with time: it has to reinvent the missing frame information inherent in long-GOP codecs.
You may also want to try Time Remapping rather than your current technique.
Thanks for the advice. Have to shoot in AVCHD for now -- we own Panasonic GH2s and can't/couldn't/won't replace this equipment for quite some time. I have yet to plough through the documentation for the AVC-Intra hack from Vitaly. The hack's massive bit rate and data card needs seem out of our league in terms of some of our long (continuous sometimes up to an hour) shoots.
And so far, we love our GH2s for our applications (documentary work, wedding videography and small business web videos).
FYI: I've used Time Remapping and find it as lacking as Speed changes in Premiere Pro. Time Warp is head and shoulders above -- so much smoother, richer, cleaner.
My workaround is, well, working great so far. Yeah, bugs me to have to transcode the clip, but for the amount of slow motion that I'm using (maybe 10 3-4 second clips per 3-minute project), it's a step I'll take to get the results.
I LOVE the idea of batch processing, but am not sure how it would work in my workflow.
I'm primarily editing in Premiere Pro and using AE to apply a few strategic effects (stabilization and slow motion) to a few clips.
What I do now is
1) export a particular clip out of Premiere Pro (using Encoder/Queue)
2) Import the clip into AE, apply my effects, double-check that they look the way I want, make adjustments
3) Export out of AE
4) Re-import into Premiere Pro and drop onto my timeline to replace the original clip.
Normally, I use the "replace with After Effects comp" and the Dynamic Link option in Premiere Pro and have no trouble toggling back and forth between Premiere Pro and AE w/out having to export clips.
However, in this particular case, I'm using AE to fix clips from a Multi-Camera (four clips) timeline, and "replace with After Effects comp" doesn't seem to handle this kind of timeline very well. The clip comes into AE as a pre-comp with four layers of clips from the Multi-Cam edit. I've tried several different ways to isolate the actual clip I want to use from the layers -- hiding layers, deleting layers, etc. -- and the programs freak out. It's probably too complex to go back and forth over Dynamic Link (which normally works like a charm for me). Sorry if that's a long explanation of this particular workflow.
Anyway, if you have any suggestions to simplify what I'm doing -- given the complexities of this project -- I am ALL ears.
I often work on 30- to 60-minute projects (wedding ceremonies) in a Multi-Cam edit. I do try to color grade and make "overall" adjustments to the full-length clips BEFORE I begin the Multi-Cam edit, but I always find that I have to make some subtle adjustments after the Multi-Cam edit, and that seems to throw things into a tailspin. AND we shoot with GH2s that produce AVCHD footage, which doesn't interact well with AE's Time Warp. Gah!
Anyway, to my original, original problem -- applying Time Warp to AVCHD footage -- I have found this process to be the fastest and easiest (but it does require a lot of exporting/importing between programs):
HOW TO APPLY AE's TIME WARP TO AVCHD CLIPS
Of course, you should triple or quadruple the length of the comp if you are slowing the speed below 50%. (You can always shorten the duration once you've got the slow motion at the speed you want it.)
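The arithmetic behind extending the comp can be sketched as a tiny helper (hypothetical, just to make the math explicit -- this is not an AE feature):

```python
# Required comp duration = clip duration / playback speed (as a fraction).
# A 4-second clip at 50% speed needs an 8-second comp; at 25%, 16 seconds.
def required_duration(clip_seconds, speed_percent):
    """How long the comp must be to hold the slowed-down clip."""
    return clip_seconds * 100.0 / speed_percent

print(required_duration(4, 50))  # 8.0  -> double the comp
print(required_duration(4, 25))  # 16.0 -> quadruple the comp
```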
It seems like the best way is just to transcode everything before you start editing; that way you'd have total freedom over which clips you want to add Timewarp to. Normally I try to avoid transcoding, and you really shouldn't have to transcode if you're just working in Premiere. But if you know you're going to move between programs -- and in this case use AE's Timewarp later on in your workflow with AVCHD clips -- it's probably just best practice to convert everything to a production codec from the beginning. That's where batch processing would come in handy.
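For what it's worth, a batch transcode of a folder of .mts clips could be scripted. The sketch below uses ffmpeg and its prores_ks encoder, which are assumptions on my part -- the thread itself uses Adobe Media Encoder, and any intraframe production codec would do; the "footage" folder name is hypothetical:

```python
# Sketch: batch-transcode AVCHD (.mts) clips to an intraframe production
# codec so every frame is discrete. Assumes ffmpeg is installed; it is
# one scriptable alternative to Adobe Media Encoder, not the thread's tool.
import subprocess
from pathlib import Path

def transcode_command(src: Path) -> list:
    """Build the ffmpeg command for one clip (ProRes 422 HQ video, PCM audio)."""
    dst = src.with_suffix(".mov")
    return ["ffmpeg", "-i", str(src),
            "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
            "-c:a", "pcm_s16le",                     # uncompressed audio
            str(dst)]

if __name__ == "__main__":
    for clip in sorted(Path("footage").glob("*.mts")):
        subprocess.run(transcode_command(clip), check=True)
```

Because the transcode runs once per project, the per-clip export/import round trip through AE disappears from the editing loop.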
Agreed! But I have to admit to not having researched enough to know which production codec I'd like to transcode to. Perhaps that's fodder for another discussion for AVCHD users of AE, such as myself.
Thanks for your thoughts!
At first I thought I was suffering from a severe case of déjà vu...
Well, with regard to Time Warp (and Twixtor as well), the effect works as you noticed, i.e. it starts its calculation from the beginning of the source footage, which results in what you see when a layer is made out of trimmed footage. You've already found a workaround with an adjustment layer. Another one is, as Mylenium has mentioned several times, to pre-compose that layer and apply the effect to the pre-composition within a master comp (make sure that the beginning of the pre-composition and the In point of the trimmed layer within it coincide). Then you can either extend the duration of the pre-comp as needed or simply enable Time Remapping.
With regard to the multicam workflow: to get the proper camera angle back, you just need to open up your nested comp and disable the eyeballs of the upper layers until you get the correct view (do not delete anything). See this thread in the PrPro Forum.
...it's probably just best practice to convert everything to a production codec from the beginning...
I would appreciate it if someone could explain to me why it is a best practice.
Since PrPro and AE can handle AVCHD clips natively, allowing them to be placed into a 32-bit colour space, the only reason I could see is that transcoding software can interpret colour data and invent frames better than After Effects or PrPro. Is it really so?
Transcoding is a best practice if you up the bit depth to at least 10 bit and build real frames for every frame. The AVCHD codec is a GOP compression scheme that works the processor very hard when decoding. PPro does a good job of it with GPU acceleration, but you'll get better results in the long run doing most of your post production with a production codec.
The easiest way to see the difference is to pull a key using AVCHD footage and then pull the same key using that AVCHD footage transcoded to a good 10-bit or higher production codec. Depending on the shot, there can be a significant difference in quality.
If it's just home movies or messing around, I wouldn't bother. If it's for national or international distribution, you'd better document every step of the production process and make sure that the distributors accept your production pipeline.
I did try the pre-compose-within-a-master-comp option and had trouble with the in/out points. Perhaps I didn't ensure that the beginning of the pre-comp and the In point of the trimmed layer matched. But I was pretty careful in setting that trial up, and would be surprised if I had somehow screwed those up. However, I'll have to give it another shot.
FYI on the multi-cam edit: I did, indeed, try turning off the eyeballs on the layers that I didn't want, and Time Warp still wigged out. On one attempt, it put the effect on the top-most layer -- even though it was "hidden"; a second time, the in/out points were completely off.
I'm pretty quick to switch tactics if I don't get consistent results. Can you tell?
Thanks for your thoughts. My work, for now, is not for broadcast, but primarily for wedding videography. I am, however, working on a documentary that I do hope will someday see the broadcast light of day. But that's another story.
Can you recommend a good production codec for transcoding AVCHD destined for DVD (Blu-ray)?
Also: I'm working on a PC, and I know people have complained about working with AVCHD and Adobe's Premiere Pro/After Effects on a Mac, but I don't see those kinds of complaints on a PC.
Other than the Time Warp issue, I've had zero problems working with native AVCHD on a PC with these editing programs.
Thanks for your time and thoughts,
I'd be interested in the answer to Fuzzy's question about the best practice of transcoding. I love not having to transcode my AVCHD footage, and as stated above have no trouble on my super-powered, RAM-heavy, GPU-accelerated PC with PPro and AE.
If it's for national or international distribution...
Well, once I saw how my underwater shots for a TV show were colour corrected by our national TV channel... I'd rather do it myself (for free)...
...Time Warp still wigged out. On one attempt, it put the effect on the top-most layer -- even though it was "hidden"; a second time, the in/out points were completely off...
As I mentioned earlier, you're facing the issue where the beginning of the pre-comp on which you're trying to apply Time Warp doesn't match the In point of a subclip within the multicam workflow. And the easiest way to correct that is the same workaround you've found, i.e. insert an adjustment layer above the pre-comp and apply Time Warp to it. In that case you don't need to undertake any extra steps such as rebuilding the Dynamic Link (which you have to do if you pre-compose a dynamically linked master comp).
Basically, whenever I know I'm going to move in-between a bunch of different programs and I'm not entirely sure what effects will be applied to what footage, I transcode everything to a production codec.
The question was not what one does, but why.
I expect an explanation from a technical point of view.
And we're talking about neither hardware capabilities nor moving footage somewhere outside the Suite.
Fuzzy and BM:
I do believe I'll have to do some serious research on production codecs that work best for transcoding AVCHD and move most seamlessly between PPro and AE.
Thanks for all your suggestions.
I would appreciate it if you could share the results of your research.
Currently I'm not aware of anything that would prevent a seamless workflow between PrPro and AE while processing AVCHD natively (yes, I'm aware of some bugs in CS6 related to processing spanned AVCHD clips, etc.).
Sorry if this discussion sounds off-topic for you.
Here Rick Gerard gives a good explanation about the reasons for using different types of codecs:
There are two types of codecs used in production. The first is a production codec. This would be something like Uncompressed 10-bit QT, ProRes 422 (HQ), ProRes 4444, Animation, or even JPEG 2000, at the highest quality settings. To be classified as a production codec, the rendered files must be lossless, or nearly lossless. Production codecs take up a lot of drive space. Some of them will not even play back in real time on any system. Many production houses use image sequences as their standard production format. You can render, re-render, and then render again into production codecs with no loss in quality. All you need is enough storage space to hold the files. Every production codec that I use is at least 10-bit color depth.
The second type of codec is called a delivery codec. These are highly compressed, play back in real time on inexpensive systems, and should never -- I repeat, never -- be used in the production pipeline. Anything in the MPEG class of codecs is a delivery codec and should not be used in the production pipeline.
Come to think of it, there is a third type of video codec. These are acquisition codecs. They are the codecs used by the various camera manufacturers to record video. AVCHD is a very low data rate codec used by consumer cameras. Other cameras from the consumer level to the pro level may use various forms of MPEG encoding. These "acquisition codecs" are not suitable for use downstream in the production pipeline at all. What I mean by that is, you should never render original footage back to the original acquisition codec. The only exception to this rule is professional cameras and recording systems that record in lossless or raw formats right from the start.
Also as Dave said earlier:
AE can handle long-GOP footage like AVCHD, but it takes forever to work with when fooling around with time: it has to reinvent the missing frame information inherent in long-GOP codecs.
So, the main thing that would prevent someone from working between AE and PrPro with AVCHD footage is basically render time. Of course it takes time to transcode, but if you're unsure from the beginning which clips you're gonna apply Timewarp to, it seems like it would make sense to spend the time transcoding from the very start. That way you'd have the freedom to use any effect on any clip.
Sorry Ben, I'm afraid you don't follow the question.
Let's try again:
- we recorded AVCHD clips
- AE and PrPro can handle AVCHD clips natively, interpreting the recorded data and placing it into a 32-bit colour space (i.e., once we import 8-bit footage into AE and set our project to 32 bit, we are not restricted to 8 bits anymore; we are not working inside the media file directly, within the narrow space of its original container, we are working on a new video stream generated by the software)
So, in which way and why -- apart from CPU load (bear in mind, we are not discussing hardware capabilities here; and sorry, modern computers are capable enough to provide a bit better performance than processing something forever...) -- may transcoding footage that was originally recorded with a lossy codec improve the output, since what was lost in the first place cannot be restored?
Does transcoding software interpret compressed data better than AE or PrPro?
Can AE's or PrPro's interpretation and restoration of compressed frames be inconsistent over time?
With regard to saving on render time: yes, that might be applicable in some circumstances (e.g. when you have to export your PrPro timeline a dozen times prior to final approval). But again, that does not relate to the original question about transcoding as a best practice. Best practice here (like everywhere) is to plan the workflow properly. If one doesn't need to present interim results to a client and exports the PrPro timeline just once for the final movie, one could hardly save any time.
With regard to CJ's original issue with Timewarp, it has nothing to do with the codec at all.
Try to apply the effect to footage in any other codec, and you'll end up with exactly the same behaviour.
The short answer is yes, it is always better to do heavy production processing with a production codec.
Keying, Time Warp, anything that requires reinterpreting pixels or changing pixel values works better in an uncompressed or lossless codec with a higher bit depth than your highly compressed camera original.
I've run tests, done comparisons, and it's always better.
Rick, if you're addressing me, then 'the short answer' is not what I was asking for.
I was asking for detailed explanation why, which unfortunately has not been presented so far.
With regard to running tests and doing comparisons, that's what I'm constantly doing -- it's an integral part of lifelong learning (which everyone should pursue, if one cares about self-development). And I haven't yet noticed any advantage of processing footage converted with a production codec out of highly compressed sources compared to processing those sources natively in a 32-bit colour space.
OK. Here is a dedicated green-screen webpage, from which one can download test plates as PNG sequences. Which one should I download, with which settings should I compress it to H.264, and then transcode that file to e.g. a TIFF or TGA intermediate sequence to see the difference?
Let's look at a test I did.
Take footage from a compressed source -- a DSLR, or in this case, an iPhone. Run it through Magic Bullet Grinder (or anything else you can use to transcode) and transcode to ProRes 4444 or your other favorite production codec. Now set up a project and make some color adjustments. I used Colorista II. Render a frame as a 32-bit Photoshop file. Now do nothing to the project but replace the transcoded footage with the camera original, and render another frame. The frames should be identical in every way.
To test that assumption, open both files in Photoshop. Add one of the files to the other as a new layer. Set the blend mode of the top layer to Difference and look for artifacts. There will be a bunch. They may be hard to see by looking at the frames side by side, but they are there. Here's a frame from my iPhone. The top image is a rendered frame from the transcoded footage. Below is the rendered frame from the transcoded footage in Difference mode over the rendered frame from the original footage.
Those red edges are differences between the frames.
Here's the view at 100%. It's easy to see the edges that have softened and spread out with just a bit of color correction.
Here's a composite of the same two frames, transcoded on top, original on the bottom, with my color adjustments turned off. There are no visible edges, proving that the pixel values of the transcoded footage and the original footage are identical before processing.
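The Difference-blend check described above can also be done numerically; here's a minimal sketch using tiny hand-made "frames" (hypothetical pixel values, not real footage -- real frames would be decoded image data):

```python
# Frames are lists of (R, G, B) tuples. difference_frame mimics
# Photoshop's Difference blend mode: per-pixel absolute difference.
def difference_frame(frame_a, frame_b):
    """Per-channel absolute difference between two equal-sized frames."""
    return [tuple(abs(a - b) for a, b in zip(pa, pb))
            for pa, pb in zip(frame_a, frame_b)]

def count_artifacts(diff, threshold=0):
    """Count pixels where any channel differs by more than the threshold."""
    return sum(1 for px in diff if any(c > threshold for c in px))

# Two hypothetical 4-pixel frames: identical except one pixel.
original   = [(10, 20, 30), (40, 50, 60), (70, 80, 90), (100, 110, 120)]
transcoded = [(10, 20, 30), (41, 50, 60), (70, 80, 90), (100, 110, 120)]

diff = difference_frame(original, transcoded)
print(count_artifacts(diff))  # 1 -> one pixel differs
```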
So there's your long explanation. I've just demonstrated that if you transcode from a compressed codec to a production codec and do nothing, the original pixel values are unchanged; but if you manipulate the footage in any way, you can measure the loss in quality when exactly the same process is applied to the footage.
Compare the render times and you'll also see a difference. Production codecs almost always render faster than compressed formats. Transcoding also minimizes the problems caused by artifacts from GOP compression, because you have discrete frames to work on one at a time instead of software-interpolated frames. This makes a big difference if you're changing the speed of a clip. The more you process the footage, the further it falls apart. Transcoding keeps it together much longer.
If you want to use camera original, be my guest. It's plenty good for home movies, experiments, or single-step processing. If you're getting paid for your work, then you need to do everything in your power to preserve quality and deliver the best-quality image you can. I can't afford the time it takes to fix a problem that could have been solved in the very first step of post production.
BTW, there was an excellent (blush, I was quoted) explanation of why given by Ben Markus just a few posts above this reply.
This forum isn't designed for us to write white papers on production techniques. It's a place to give advice and try to help. Lucky for you I happened to be devoting a bunch of time this Saturday to working on my articles so I took the time to write a bit about production techniques in this reply. If you want the full white paper you'll have to wait until I publish. More of what you see above will be included.
Rick, I'm sorry, I don't see an explanation in Ben's quote from you.
What I see is some codec classification, without a single word about what happens, and why, when it comes to processing data from highly compressed footage versus lossless (or nearly lossless) footage in a 32-bit colour space.
And yes, I do not understand why transcoded footage can give more room for adjustment if we are not inside that footage, but outside it. We already opened the can and poured the liquid into a pool...
Anyway, here are my morning exercises (I hope you don't mind my living in the PC world, hence a TGA sequence instead of ProRes). I would appreciate it if you had time to take a look and comment.
In my understanding, the artifacts in linearised colour space, which are visible from the very beginning, prior to any grading, are caused by errors in floating-point calculations during transcoding (as an example, set an object's opacity in ActionScript to 1 or 0 and dabble with it in a loop, decreasing or increasing by 0.1 -- you hardly ever get the precise value you expect)... At least, I don't have any other reasonable explanation so far...
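The floating-point drift described above is easy to demonstrate in any language; here's the same effect as the ActionScript loop, shown in Python:

```python
# Adding 0.1 repeatedly drifts, because 0.1 has no exact binary
# representation: ten additions of 0.1 do not sum to exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```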
I took a look. The Linearize Working Space option is producing the difference, because the colour space of TGA is different from the colour space of your AVCHD. This is expected.
I was surprised to see how little difference there was in your demo with the two source files. Most of the time it's very easy to see when you start doing color grading. In my testing I've always seen more difference in the rendered output than inside AE. Rendering from a production codec to a production codec produces consistent results. Did you compare the rendered footage? Rendering from a compressed source to a production codec can be and usually is inconsistent.
As I said before, your workflow is up to you. I didn't see any flaws in your testing methodology. You didn't do any significant color grading, and the keying was fairly simple. If it's working for you, it's fine. If your project was headed for a broadcaster with a specific set of production guidelines then you need to follow them and have documented proof of your workflow or they won't accept the product no matter how good it looks.
The reason I ask for the clarification is not to argue against the necessity of following a broadcaster's specific requirements or just declare that from where I'm standing my workflow is correct. The reason is to gain an understanding of WHY something is going on.
Yes, I compared rendered TGA sequences.
I applied some more colour grading for the sake of applying some more colour grading, so the final result now looks something like this:
Here is the difference between exported TGA sequences:
Here is the difference between the TGA sequence exported out of the composition with the original AVCHD footage and that very composition with the original AVCHD footage in linearised 32-bit colour space:
Here is the difference between the TGA sequence exported out of the composition with the transcoded TGA sequence and that very composition with the transcoded TGA in linearised 32-bit colour space:
Here is the difference between differences:
In non-linearised 32-bit colour space there is no visible difference in any case.
By the way, in my understanding the difference between the original AVCHD file and the transcoded TGA sequence in linearised colour space is not the result of the differences in their own colour spaces, since we are not in there. As in an example from a programming language: we assigned the value of the integer variable iAVCHD to the numeric one, nAVCHD, converted the string variable sTGA to the numeric nTGA as well, and are now comparing nAVCHD and nTGA, rather than iAVCHD and sTGA. Linearised 32-bit colour space just provides higher precision in the comparison. That's why we can see the difference between the exported TGA sequences in linearised colour space, which is gone right after non-linearising.
Maybe I'm wrong. I'm always keen to share and compare skills.
I followed all your discussion here. I have the same problem. I have a multicam project on my timeline from a Prosumer camcorder.
Since it came from an old project (it was transcoded to .mov ProRes LT with FCP), I used only transcoded files. And AE still creates all the problems with the Time Warp effect. I cannot find a workable solution with multicam clips and Time Warp. So please share if there is a better way than the one ceggertjohnson is using.
Regarding the AVCHD question, just my 2 cents;
Since I moved to Premiere CS5.5 from FCP7 for new projects, I have only used native AVCHD support (on Mac). If you have a CUDA-enabled card it's an even better experience; I don't, and it's still better than transcoding -- for me at least. I used to keep my transcoded files on an external drive, but they were so huge that I wasn't able to buy new drives every month. So I basically finished with transcoding.
I still deliver to several European TV stations, and they do not complain about my workflow; they accept my final product over the internet as is. Also, I shoot interlaced because that is what they request, even if it sounds strange.
Though for films for theatres, with a huge budget, I would not mind having a RAW 4:4:4 file for better effects and keying purposes. But for the stuff I do, it is more than enough, and I guess there are thousands of us who purchased a fairly good camcorder with this AVCHD format, which is only 4:2:0. If you're shooting Batman movies, you are not going to shoot with this type of camcorder anyway.
So, we are still looking for a solution for FILES to be sent from a multicam sequence to AE...
As I pointed out earlier, applying the Time Warp effect to an adjustment layer is the easiest and most elegant solution here.
I'm not sure which other multicam issue you're still looking for a workaround for...
Thanks, I tried that but it did not work. I might be doing something differently.
I still do not get it. I will try again, though, in the next few minutes. Once I apply the effect, whatever I do, it starts from the beginning of the video. I have a soccer game which is 45 minutes long. So I have a little part on my Premiere timeline and I want to send it to AE. Even if I make the clip (composition) longer, it still applies the effect to the whole 45-minute file.
Hi Ben - Late to the party here, but I find the info in this thread quite helpful. I also have a Panasonic GH2 (unhacked) shooting AVCHD files. I want to try transcoding to a "production codec", but I don't have FCP. If I purchase Apple Compressor, will this enable me to transcode to ProRes? Or, should I be using something else (ProRes or another codec)?
FWIW, I am using CS6 Master Suite on a 6-core MacPro with 32GB RAM. AVCHD footage playback is limited in AE at full resolution, but playback in Premiere Pro is very good. PP playback slows down if I add effects.
Unfortunately, you can't get Apple ProRes codecs without buying Final Cut Pro. I'm not sure if Apple Compressor has them, but if you have the CS6 Master Suite then there should be plenty of production codecs to choose from. I can suggest QuickTime Uncompressed 10-bit and QuickTime Animation. However, both of those codecs are going to give you huge files that won't necessarily play back off the desktop. I've also heard Rick Gerard and Dave La Ronde say good things about QuickTime PNG and JPEG 2000. Another thing to consider is using TIFF or PNG image sequences. I wouldn't buy ProRes codecs unless you absolutely need them for a specific job. Just use Adobe Media Encoder to test out the above options. Certain codecs tend to render color slightly differently, so you might want to do some tests. I actually have some here from another thread that you can look at:
Ben - Thanks for the prompt reply and comparison test images. I've read people referring to ProRes many times, so I just assumed that was a common requirement.
It's good to know I can look at other codecs, but I also wasn't aware of possible color shifts when transcoding. Is there a reason you did your tests with a single solid red color instead of a color chart? What is the "original" file in this test?
Two more pesky questions: How do I know which of the codecs available in Adobe Media Encoder are 10-bit?
Do any of these provide 4:4:4 or 4:2:2 transcoding of the AVCHD files from a Panasonic GH2?
I am still new to editing video, so pardon my neophyte level of knowledge.
This was from another thread, and I think the original poster was asking specifically about differences in red. These tests were created using a 1920x1080 comp in After Effects using the same red solid. If you'd like, feel free to run tests using a color chart.
Uncompressed, DNxHD, JPEG 2000 and PNG QuickTime are all 10-bit. I believe Cineform is 10-bit and 12-bit.
For ProRes, check the Apple documentation for more detailed information.
Animation is 8-bit and creates very large files that might not even play back on very fast systems. For this reason I wouldn't recommend it over the others.
However, any of these codecs should be fine for transcoding your AVCHD files. They each have their own advantages and disadvantages, so it might be worthwhile to read up on each one.
Ben - Once again, thanks for the reply and detailed info.
As the original poster of this thread was inquiring about time warping, I am wondering if transcoding will also help with motion tracking (CS6 AE's camera tracker and AE's mocha). As transcoding creates individual frames, will these tend to be more accurate, or improve the processing speed of motion tracking?
If you were following this discussion, you'd find descriptions of some test techniques which allow anyone to compare and contrast 'native vs transcoded' quality. Run your own tests. The general answer for the Adobe Suite is: 'No, you can't get any advantage out of transcoding in terms of quality.'
What is more, you lose some quality while transcoding. In particular, AVCHD footage can store a surprising amount of 'over-range' data in the Red channel (overbrights, or superwhite areas), which will be clipped on transcoding -- in my 'morning exercises', the artifacts visible in linearised colour space between the original footage and the transcoded TGA sequence prior to any manipulation are a result of this clipping.
Yes, you can save on render time.
With regard to production codecs, check out this thread as well.