Thanks, I must have completely overlooked this. Does anybody know whether Adobe plans to support higher-resolution audio sampling rates in the future (I've been using Audition for years -- back when it was still Cool Edit Pro, actually -- so I am a bit surprised that Premiere Pro doesn't support 192kHz since Audition/CEP has for many years)?
Unfortunately, Adobe is typically very mum about future plans.
Out of curiosity, what is your intended delivery format for the higher-sampling-rate audio?
Blu-ray can handle audio content up to 192 kHz/24-bit (technically, the DVD-Audio spec allows for 192 kHz/24-bit stereo audio as well). From what I'm seeing, though, it looks like Blu-ray may not (yet) have the data transfer speed to accommodate a video stream along with the 192 kHz audio stream.
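To put rough numbers on that transfer-speed concern, here's a quick back-of-the-envelope sketch. The formula is just rate × depth × channels for raw, uncompressed LPCM; the figures are illustrative, not official spec limits:

```python
def lpcm_bitrate_mbps(sample_rate_hz, bit_depth, channels):
    """Raw bitrate of an uncompressed LPCM stream, in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

# 192 kHz / 24-bit stereo: over 9 Mbps before any video is added
print(lpcm_bitrate_mbps(192_000, 24, 2))   # 9.216
# CD-quality stereo, for comparison
print(lpcm_bitrate_mbps(44_100, 16, 2))    # 1.4112
```

That 9+ Mbps is for stereo alone; a multichannel 192 kHz mix multiplies it further, which is where the contention with the video stream comes from.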
Personally I find it overdone, but 640 Kbps is perfectly acceptable.
You know that 192 kHz is WAY overkill. Not only is it a waste of HDD space but also of CPU processing. It's just another marketing tool used by manufacturers to keep us buying expensive equipment. These are the reasons why:
The Nyquist–Shannon Sampling Theorem states that an analog signal can be perfectly reconstructed if the sampling rate exceeds 2x the highest frequency of the original signal (otherwise you get aliasing, or noise). This means that if the sound you wanna capture has a max frequency of 20 kHz (which is at the limit of human hearing), then a sampling rate of 40 kHz would faithfully capture such a sound. But engineers decided to make 44.1 kHz the CD standard as a way of adding extra "cushion".
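For the curious, the "folding" behavior the theorem guards against can be sketched in a few lines. This is a simplified model assuming ideal sampling with no anti-aliasing filter, and the frequencies are just illustrative:

```python
def alias_frequency(f_hz, sample_rate_hz):
    """Frequency a pure tone appears at after sampling, due to spectral folding.

    Any component above the Nyquist limit (sample_rate / 2) folds back
    into the 0..Nyquist band instead of being captured correctly.
    """
    return abs(f_hz - sample_rate_hz * round(f_hz / sample_rate_hz))

print(alias_frequency(19_000, 44_100))  # 19000 -> below Nyquist, captured as-is
print(alias_frequency(25_000, 44_100))  # 19100 -> folds back as a false 19.1 kHz tone
```

That false 19.1 kHz tone is exactly the "aliasing or noise" mentioned above, and it's why converters filter out everything above Nyquist before sampling.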
At 192 kHz, you're capturing sounds way above human hearing (as high as 96 kHz!!!). Who's gonna hear that? Some say that recording that high is pleasing to the ear because the higher frequencies affect the lower (hearable) ones in a positive way. But let me assure you that you're already recording at an even higher frequency than 192 kHz. Modern converters capture sound at around 5 MHz - 7 MHz (yes, megahertz, not kilohertz), but for other reasons. This is done to circumvent a problem created by the anti-aliasing filter built into the converters. I'm not too savvy about this, but my understanding is that the filters aren't steep enough and end up cutting more signal than they're supposed to if not given enough room. Steep analog filters are too expensive, and thus a digital one is implemented in the converter to get the job done. So, basically, your converter captures the sound at a super high sampling rate and then brings it back down to the target sampling rate in the output stage (before going to your HDD, of course).
So, as you can see, even 44.1 kHz is enough to capture any sound we hear if you have good converters with good filters. On the other hand, bad converters may sound better at higher sampling rates because they have bad filters. Pushing those filters away from the hearing range makes them sound better. In that case, then, I do understand going over and using 88.2 kHz or even 96 kHz for your recordings. But 192 kHz is overboard and won't give you ANY benefits. Like I said, you'll strain both your HDD and your CPU with large files and no sound quality improvements.
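The HDD-strain point is easy to quantify. A minimal sketch of uncompressed PCM sizes, ignoring the few bytes of file-format overhead:

```python
def wav_megabytes_per_minute(sample_rate_hz, bit_depth, channels=2):
    """Approximate uncompressed PCM size per minute, ignoring header overhead."""
    return sample_rate_hz * (bit_depth // 8) * channels * 60 / 1_000_000

print(wav_megabytes_per_minute(44_100, 16))   # ~10.6 MB/min (CD quality)
print(wav_megabytes_per_minute(192_000, 24))  # ~69.1 MB/min, roughly 6.5x larger
</```

And that's per mono-pair minute of raw audio; multitrack sessions multiply it by the track count, which is where the disk and CPU pain really shows up.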
And let's not forget that in order to capture those superhuman frequencies, you need to have equipment rated to capture them (i.e. converters, microphones, preamps, monitors, cables, etc.), which tends to be VERY expensive. It's all snake oil to get you to buy the next best thing. And what's crazy is that people will hear a difference even though there are tests that prove otherwise. The power of the mind will make you believe anything you want to, which is why it's hard to prove things involving our senses. Thus, those companies reap the benefits of our ignorance.
There are more reasons I can think of, but those are the strongest.
Hope this helps!
You're talking to somebody who has a pretty extensive background in audio work -- though I am more or less a neophyte when it comes to video work.
I wanted to point out a few things in your post...
It is currently impossible (not to mention theoretically impossible, given the way that digital data is created) to 'perfectly reconstruct' an analog waveform in digital recording. There are digital technologies that come very close -- DSD, for example (which has sampling rates in the megahertz range) -- and perhaps it is valid to argue that, with even 96,000 samples *per second*, the differences between a digital reconstruction of an analog signal and the original signal are insignificant. But it is still imperfect. Nitpicking? Yes, I will admit that it is nitpicking.
The 44.1kHz sampling rate wasn't picked just because it added "extra cushioning"...it was picked because of then-current developments in tape-based PCM storage.
The limit of human hearing isn't strictly 20kHz -- there are people who (naturally) can't hear anything close to that and people who can hear quite a bit above that (there have been numerous instances where high frequencies have bothered the living hell out of me but nobody else around noticed anything amiss...it isn't exactly a gift). But the point of higher sampling rates isn't strictly because of a desire to capture higher frequencies -- it is also done in an attempt to create a more ideal sonic reproduction. That does account for some of the differences that some people hear (especially as you start going into even lower sampling rates). I've been working with digital audio on an almost daily basis for many years now (and analog before that), and I can hear some differences in sampling rates, but I doubt that most other people would ever notice it.
So yes, a lot of the stuff regarding high-resolution audio is fluff. A lot of great-sounding digital recordings were actually done in the earlier days of digital, when most people didn't use sampling rates beyond 50kHz (sometimes quite a bit lower), sometimes with just 12-bit depth. And there is a vast quantity of substandard crap being churned out now despite audio advances (can you listen to a modern-day pop record without getting a headache? I can't). But there are valid reasons for it as well, and truth be told, I'd rather see people embrace higher-resolution audio standards as opposed to embracing low-bitrate lossy audio compression (which unfortunately seems to be the direction that more and more people are taking, even though I think the figures regarding digital downloads outselling CDs are somewhat inaccurate).
But, of course, the same arguments could be waged in regard to video equipment :-)
I might not have the experience in audio work you have since I've only been doing this for the past 6 years or so. But I've been a musician for far longer than that, and I've learned A LOT mostly from really smart people in the industry. So, I'm not gonna lie to you and say that I've done extensive testing in this area because I simply do not have the equipment, nor the money to buy it (WAY too expensive). But we do share the neophyte status when it comes to video editing :-P
Anyways, the Nyquist Theorem is not a theory, which is what people are led to believe. It is a theorem, meaning it's already mathematically proven. It is proven that, as long as you follow the premise of capturing twice the highest frequency of the sound source, you'll get a perfect reproduction of it. To capture more than that is a waste of bandwidth, especially because most people won't even hear above 18 kHz, nor do they have the equipment to reproduce such frequencies. Most consumer systems and audio gear, including those found in professional studios, go up to about 22 kHz. You need to spend BIG dollars for anything that goes beyond that. So, who are we really making music for here? The super rich? Dolphins?
Now, I know you're not just talking about higher frequencies, but the amount of samples needed to reconstruct a perfect copy of the original waveform. OK, well, this is the kind of snake oil marketing BS I was talking about. The biggest one being that 1-bit DSD crap that Sony/Philips is pushing. Adding more samples to the recording will not make any difference in how faithfully you can reproduce a sound. It'll just make the files bigger for no reason. Again, the Nyquist Theorem already proves this. This is FACT! Here's a link I found interesting regarding these audio industry lies; maybe you will too: http://theaudiocritic.com/back_issues/The_Audio_Critic_26_r.pdf It starts on page 5, but the one pertaining to this discussion is lie #3 on page 6. :-D
Don't forget that modern converters already sample at much higher frequencies than the target sampling rate. I believe my RME Fireface 400 samples at 5.6 MHz, which is twice the amount of samples compared to DSD technology, before going back down to the target rate. But, like I said, it does so for other reasons and NOT because it needs that many samples in order to faithfully reproduce a waveform. Of more importance are the quality of the FIR (Finite Impulse Response) filter and the clock inside the converters. These components, among others, are what make a converter high grade. The converter chips themselves are very inexpensive (in the tens of dollars), which is why you hear some companies advertising having the same converter chip as a Pro Tools HD rig (not the best example, I know).
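For what it's worth, those modulator rates line up with standard power-of-two multiples of 44.1 kHz. The exact figure for any given interface is from memory, so treat this as illustrative arithmetic rather than a spec sheet:

```python
BASE_RATE = 44_100  # Hz, the CD base rate these multiples are derived from

dsd64 = 64 * BASE_RATE       # 2,822,400 Hz: standard DSD's 1-bit sample rate
modulator = 128 * BASE_RATE  # 5,644,800 Hz: the "~5.6 MHz" rate mentioned above

print(dsd64, modulator, modulator // dsd64)  # 2822400 5644800 2
```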
By the way, I didn't say humans can only hear up to 20 kHz. I'm sure there are people who can hear above that. My point was that the 20 Hz - 20 kHz range is what's generally accepted as an average for humans (which implies that there are people who can hear above/below that). Also, the reason why modern-day pop records cause headaches and sound horrible is because of a totally different issue known as "The Loudness War" (I'm sure you know about it, so I won't go into details). However, I do agree with you as far as compressed audio goes. Unfortunately, there's a reason for that, and there's nothing we can do about it until the day Internet bandwidth becomes more accessible and cheaper. Eventually it'll get to the point where uncompressed audio can be streamed reliably through the net. But, until then, we're stuck with MP3, AAC, DTS and other audio compression formats. As far as digital media distribution goes, it's the future and companies are seeing that. More and more people download music rather than buying CDs, so I do believe those numbers are accurate. Just look at sales from iTunes and even games like Guitar Hero and Rock Band. It's just a matter of time.
It is proven that, as long as you follow the premise of capturing twice the highest frequency of the sound source, you'll get a perfect reproduction of it.
Given the fact of quantization error, how can anyone well versed in digital signals actually claim that? I just don't believe it's possible to get a 'perfect' reproduction of any analog signal, audio, video or otherwise.
You're right, and I was actually hoping Hacienda would've caught that. But his reasoning as to why a digital representation of a waveform is not perfect is incorrect. This doesn't mean that what I said about the Nyquist Theorem is incorrect too. What I meant is that the formula is perfect, and would give us a perfect representation of its analog counterpart given a perfect implementation. But this is sadly not the case in the real world, where the laws of physics apply. We're unable to capture a perfect copy of the original waveform, not because we need more samples to define such a waveform, but because of inherent defects found in the ADC process (such as quantization errors, as you've pointed out).
As I'm sure you know, there are no real 24-bit converters. The best audio converters I know of are from Lavry Engineering, which have a real resolution of ~21 bits and a noise floor of -127 dB. These are ~$4,000 USD converters, and I believe none of them go up to 192 kHz. Why? Because over at Lavry Engineering they don't use BS marketing to sell their products (they don't need to). So even the best converters in the audio industry are not able to deliver true 24-bit resolution due to real-world limitations found in integrated circuit design. 24 bits theoretically gives us 16,777,216 levels of resolution, which is a finite number compared to the infinite resolution of an analog waveform. This is where quantization errors appear, since we're now limited to a finite number of levels. The solution to this is dithering. With dithering, the LSB (Least Significant Bit) is stimulated with low-level noise to randomize quantization errors. This low-level noise actually sounds more natural to us as opposed to the undithered file. But I'm sure you know all about this too.
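A toy sketch of those numbers, plus a simplified TPDF dither step. This is an illustrative model, not production dither code, and the dB formula is the standard ideal-quantizer approximation:

```python
import random

def quantization_levels(bits):
    """Number of discrete amplitude steps a PCM word of `bits` can represent."""
    return 2 ** bits

def theoretical_dynamic_range_db(bits):
    """Ideal quantization SNR for a full-scale sine: ~6.02*bits + 1.76 dB."""
    return 6.02 * bits + 1.76

def quantize(sample, bits, dither=False):
    """Quantize a sample in [-1.0, 1.0) to `bits`, optionally with TPDF dither.

    TPDF dither sums two independent +/-0.5 LSB noise sources, turning
    correlated quantization error into benign broadband noise.
    """
    scale = 2 ** (bits - 1)
    x = sample * scale
    if dither:
        x += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return max(-scale, min(scale - 1, round(x))) / scale

print(quantization_levels(24))           # 16777216, as quoted above
print(theoretical_dynamic_range_db(16))  # ~98.08 dB, CD's theoretical floor
print(theoretical_dynamic_range_db(21))  # ~128 dB, near the -127 dB noise floor cited
```

Note how ~21 real bits lands right around that -127 dB figure, which is the point: the bottom bits of a "24-bit" converter are below its own analog noise.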
My point is that Resolution comes from Bit Depth and not from Sample Rate. More resolution (or Bit Depth) would get us closer to the analog waveform we're capturing. But, alas, we're unable to achieve that for the reasons I explained earlier. On the other hand, adding more sample points does not increase the resolution, but it does give you enough information to copy an analog signal as long as you follow the Nyquist Theorem.
This doesn't mean that what I said about the Nyquist Theorem is incorrect too.
Like you say, in a "real world, practical" sense, it kind of does.
Because of the trajectory this discussion is taking, I'm not certain whether you are referring to my post or to Jose's post (though considering that you are quoting his post, I'll assume the latter). For the record, I have never stated that a perfect digital reproduction of an analog waveform is possible, because at least as of right now, it is not possible (though as Jose has pointed out, the model presented in the theorem is mathematically correct). But truthfully, I'm not interested in nit-picking over this, since Jose is obviously passionate about his opinion and I am holding fast to mine, so let me just end this by saying that my only interest in higher sampling rates (and by extension, higher bit depths...despite my interest in technologies like DSD, which may very well be snake oil, I also realize the benefits of higher bit depths that PCM currently provides) is strictly in the interest of achieving something closer in the digital realm to a 'perfect' reproduction of an analog waveform without resorting to using analog mediums, which have their own imperfections as well (though I don't necessarily mean that to be a negative thing -- I rather like the sound of tape).
I have never stated that a perfect digital reproduction of an analog waveform is possible
That's what I was agreeing with. Practical limitations prevent any type of perfect reproduction, theorems be damned.
I also agree that using higher sampling frequencies is a well established technique used to improve overall sound quality, though I won't argue the specifics of how that is achieved.
Aww, and here I thought we were having an interesting conversation :-(
Just a few things before I go. Yes, using higher sampling rates is a well-established technique in mastering houses because those guys have very expensive analog equipment that they prefer to use when processing audio signals. So, in the quest to keep your music at the highest fidelity, they will upsample it in order for it to withstand another trip through their converters (and thus their analog processing chain).
Some people also like working with high sample rates because their plugins sound better this way (and this is true). Although, today we have plugins, like those from UA (Universal Audio), that already upsample audio signals internally. Others use high sample rates simply because they don't have good converters and actually hear a difference at those higher rates (which is the case with bad converter designs). But the difference is that mastering engineers have the equipment to get away with it, while most of us don't (unless you have a state-of-the-art studio, that is). If you ask a good mastering engineer which sample rate to use for your music, he'll most likely say to use the sample rate of your final format (44.1 kHz for CD, 48 kHz for DVD and 96 kHz for DVD-A). I just think that 192 kHz is WAY overkill and will not give you any benefits compared to 88.2 kHz or 96 kHz (only drawbacks).
Anyways, I was really trying to help Hacienda and did not engage in this conversation to brag about my knowledge of the subject (which seems to be how I've been perceived).
Good luck guys! :-)