I would stick with the more standard sampling rate of 48k, Steve. I don't know how Premiere Elements would perform with those more intensive sampling rates.
But, as far as I know, it's not the sampling rates that lead to out of sync problems. It's usually the result of a non-DV source (like an MPEG) being used in a DV project.
What type of camcorder are you shooting with? If it's a miniDV and you're capturing over Firewire, you should have no audio sync issues at all.
I'm shooting with a minidv/dvcam camera. I am not necessarily having problems, yet. I am trying to anticipate issues so that I don't ruin a shoot.
I have seen some claims on the net that, when doing a two-system recording, the audio should be sampled at 48 kHz, since that rate is the standard for DVD. Whether that claim is in fact correct is part of what I'm trying to determine. The other part of what I'm trying to understand is this: if I sample audio separately at a very high rate and then downsample to 48 kHz, will the merged video and audio clip drift out of sync at some point, even if it doesn't appear at the beginning of the clip?
Again, I don't know if sampling rates will affect the program's performance -- but I also don't see that you'll get significantly better audio by recording at more than 48k because that's probably all the program is going to output anyway.
A miniDV, with your files captured over FireWire, is the ideal workflow for this program. You should have no problems using it.
The reason for sampling at greater than 48k is to allow better audio post-processing. At least, that is my understanding; please correct me if this is a misunderstanding. The idea is that after all of the post-processing, the audio is downsampled to 48k and then merged with the video clip.
You might want to do a test run with a short clip or two, Steve.
This is a consumer program -- not a professional app. So, if you're planning to put any non-standard video or audio into it, it's best to make sure it's going to work before you sink too many hours' work into a project.
<Steve_C@adobeforums.com> wrote in message
news:firstname.lastname@example.orgNXanI...

> I have read that, in order to sync separate audio and video files and for
> them to remain in sync, the audio must be sampled at 48 kHz. Is this
> correct?

No.

I just took a 24/96 audio file and put it into the media list for an existing APE4 project.

I then placed that audio file on the project's timeline.

It behaved like any other supported file type. I was able to edit and monitor and apply effects to it at will. This could include hand-syncing it to the video, were I so inclined.

> If so, does the following work just as well?

> I plan to record audio with a sample rate of either 96 kHz or 192 kHz.

Been there, done that. This was where I obtained my 24/96 test sample - it was one of a number of 24/96 recordings that I made. I even went to the trouble to ensure that all of the microphones and recording equipment that I used were competent performers at > 40 kHz.

> Perform post-processing in Adobe Audition and then downsample to 48 kHz
> prior to importing the file into PE.

Unnecessary. However, having the audio file already in 48 kHz format might speed rendering for export a bit. Or not.

> Once I import the audio into PE, it is possible that I might trim either
> it and/or the video after syncing them.

The larger question is whether or not there is any sound-quality justification for going to the trouble of making the 24/96 recording in the first place.

This is still a controversial topic, and it has been investigated by many people, using both informal and rigorous procedures. At this time we find that scientific analysis and rigorous test procedures (e.g. careful blind tests) say so-called hi-rez formats make no audible difference.

Some informal evaluation procedures and media pundits lead other people to believe that hi-rez formats do make a difference.

In recent years two so-called hi-rez audio formats for pre-recorded media have been market-tested. Both have essentially failed to gain adequate commercial support in the worldwide mainstream marketplace for pre-recorded media.
I agree with Arny. The trauma of working with more system-stressing, drive-space-hogging high-resolution audio files is unlikely to be rewarded by an audible improvement in the quality of the result - particularly when replayed through typical TV systems. And I speak as a classical music recording engineer.
The key to two signals staying in sync is that the clocks doing the sampling of each signal have to be very precise. If possible, the same clock should be used. Otherwise, even if you start in sync the two will gradually get out of sync due to the different clocks.
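To put a rough number on that drift, here is a small sketch (my own illustration, not from the thread; the +/-50 ppm figure is an assumed, typical crystal-clock tolerance for consumer gear):

```python
# Sketch: how fast two independent clocks drift apart. If a camera clock
# and a recorder clock are each within +/-50 ppm of nominal, their rates
# can differ by up to 100 ppm (parts per million).
def drift_seconds(duration_s, ppm_offset):
    """Accumulated sync error after duration_s, given a clock-rate offset."""
    return duration_s * ppm_offset / 1_000_000

# After a 30-minute take with a 100 ppm mismatch:
error = drift_seconds(30 * 60, 100)
print(f"{error * 1000:.0f} ms drift")  # 180 ms, i.e. several video frames
```

At roughly 33 ms per NTSC frame, 180 ms is already five or six frames of slip, which is why locking both devices to one clock (or resyncing in post) matters far more than the sample rate chosen.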
I see no reason why using 96kHz sample rate would help the sync issue.
In fact, the signal has to be low-pass filtered to below 24 kHz before downsampling to a 48 kHz sample rate, and if the filter introduces delay (possibly varying with frequency) you will be out of sync already.
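For what it's worth, that delay is easy to estimate in the common case. A sketch (my assumption: a linear-phase FIR anti-alias filter, which most resamplers use; such a filter delays every frequency equally by (taps - 1) / 2 samples):

```python
# Sketch: the constant group delay of a linear-phase FIR filter.
def fir_group_delay_ms(num_taps, sample_rate_hz):
    """Delay in milliseconds introduced by a linear-phase FIR filter."""
    delay_samples = (num_taps - 1) / 2
    return delay_samples / sample_rate_hz * 1000

# A 255-tap anti-alias filter running at 96 kHz:
print(f"{fir_group_delay_ms(255, 96_000):.2f} ms")  # about 1.32 ms
```

Because the delay is constant across frequency, a good resampler can compensate for it exactly, so in practice the downsampling step shifts the audio by a known, correctable amount rather than smearing it.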
Thank you to Steve, Arny, Peter, and Ozpeter for your suggestions.
I have been reading a book by Jay Rose - "Producing Great Sound for Film and Video". If any of you have thoughts or corrections on the following, I would appreciate hearing from you.
Rose states: "Once you've decided on the highest frequency your project will need to support, double it (to allow for the Nyquist limit), add a safety margin, and choose the closest standard rate." So, if I understand correctly, one should ideally record audio at 96 kHz if the audio is going to be combined with video.
I'm not sure that I hear a difference at 96 kHz, apart from a possibly noisier recording. However, if I understand Rose correctly, recording at a higher sample rate than the final playback medium requires will reduce the distortion that can occur during playback.
>Rose states: "Once you've decided on the highest frequency your project will need to support, double it (to allow for the Nyquist limit), add a safety margin, and choose the closest standard rate." So, if I understand correctly, one should ideally record audio at 96 kHz if the audio is going to be combined with video.
No. Don't confuse highest frequency with the sample rate.
A 48 kHz sample rate supports frequencies up to 24 kHz, which is likely much higher than you or most people can hear (dogs and bats excepted :)). Only purists with top equipment and golden ears need use a 96 kHz sample rate. If you ultimately want a 48 kHz sample rate, record at 48 kHz from the outset and save both the effort and the slight (but probably negligible) degradation from the sample-rate conversion.
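The arithmetic behind that point is just the Nyquist relationship, restated here as a quick illustration of my own:

```python
# Illustration: the highest representable frequency is half the sample
# rate, so even 44.1 kHz covers the full audible range (~20 Hz - 20 kHz).
def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sample rate can represent."""
    return sample_rate_hz / 2

for rate in (44_100, 48_000, 96_000):
    print(f"{rate} Hz sampling -> frequencies up to {nyquist_hz(rate):.0f} Hz")
```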
P.S. If you plan to do much processing of the audio, you might consider recording with 24 bits per sample and converting to 16 bits after processing. This would help keep quantising noise low.
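A minimal sketch of that final 24-to-16-bit step (my own illustration, not from the thread; `to_16_bit` is a hypothetical helper, and the triangular dither is one common way to keep the quantising noise benign):

```python
import random

_rng = random.Random(0)  # seeded only so this sketch is reproducible

def to_16_bit(sample_24):
    """Convert one signed 24-bit sample to signed 16-bit with TPDF dither."""
    # One 16-bit step spans 256 counts at 24-bit resolution. The dither is
    # a triangular-pdf value of +/-1 LSB at the 16-bit scale, added before
    # rounding so quantisation error becomes benign noise, not distortion.
    dither = _rng.random() - _rng.random()
    value = round(sample_24 / 256 + dither)
    return max(-32768, min(32767, value))
```

Real editors and Audition do this internally when you export at a lower bit depth; the sketch is only meant to show why working at 24 bits and converting once at the end keeps the noise floor low.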
Peter, Thanks again!