I have a heavily clipped piece of music I wish to declip. If I scan the audio with Audition (CS6) “Amplitude Statistics”, it reports the number of clipped samples in their thousands.
If I scan within the Declipper, it reports none. In fact, even if I deliberately increase the amplitude of the audio by +10dB or more, the Declipper still reports no clipped samples.
The declipper tool is simply not working, and no matter what I try, it never reports clipped samples.
Any ideas anyone?
You will probably need to adjust the settings to match your audio, particularly the Tolerance value. Larger values cut further into the clipped audio. It depends what actual level your clipped samples are at. Perhaps it would be easier if it was called Threshold rather than Tolerance, and was in dB rather than a percentage.
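Adobe doesn't document how the Tolerance percentage maps to a level internally, but one plausible reading (an assumption on my part, not Adobe's published algorithm) is that it sets a detection threshold that percentage below full scale. A quick Python sketch of that interpretation:

```python
import math

def tolerance_to_threshold(tolerance_pct):
    """Read a Tolerance percentage as a detection threshold:
    samples within tolerance_pct percent of full scale (1.0)
    count as potentially clipped. (Assumed mapping, not Adobe's.)"""
    threshold = 1.0 - tolerance_pct / 100.0
    threshold_db = 20 * math.log10(threshold)  # same level in dBFS
    return threshold, threshold_db

lin, db = tolerance_to_threshold(1.0)
print(round(lin, 2), round(db, 2))   # 0.99 -0.09
lin, db = tolerance_to_threshold(50.0)
print(round(lin, 2), round(db, 2))   # 0.5 -6.02
```

Read that way, even the 50% maximum only reaches 6dB below full scale, which rather supports the point that a Threshold calibrated in dB would be far less confusing than a Tolerance in percent.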
The recommended setting (from what I read) for the Tolerance is around 1%. I have tried adjusting this value all the way to its maximum of 50%, but it still falls a long way short.
As an example, I’m using a sample piece of audio with a sample rate of 44.1kHz and a 32-bit depth. If I use the Amplitude Statistics tool, it reports 2936 potentially clipped samples for the left channel, and 3749 for the right channel. Now, I’ve always found the Amplitude Statistics tool to be highly reliable when analysing audio for clipped samples. Combined with the Declipper and a few other tweaks, I can carry out some pretty good audio restoration.
Using the above values, the Declipper at 1% tolerance returns no clipped samples, and at the full 50% it returns 1216 problem samples. I know the deliberately clipped sample I’m working with has a maximum over-clipping of +1.8dB, so setting the Gain at -2.0dB, a Tolerance of 1%, a Min Clip Size of 3, and using FFT interpolation should give me pretty good results – it does nothing.
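For reference, the kind of scan I’d expect the Declipper to perform (my assumption of how Tolerance and Min Clip Size would interact; Adobe’s actual implementation isn’t documented) is just a search for runs of consecutive samples at or above a threshold, discarding runs shorter than the minimum clip size:

```python
def find_clipped_runs(samples, threshold=0.99, min_clip_size=3):
    """Return (start, length) for every run of consecutive samples
    whose absolute value is >= threshold and whose length is at
    least min_clip_size."""
    runs = []
    run_start = None
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_clip_size:
                runs.append((run_start, i - run_start))
            run_start = None
    # Handle a run that continues to the end of the buffer
    if run_start is not None and len(samples) - run_start >= min_clip_size:
        runs.append((run_start, len(samples) - run_start))
    return runs

# An isolated full-scale sample is ignored; a flat-topped run is caught.
audio = [0.2, 1.0, 0.3, 0.8, 1.0, 1.0, 1.0, 1.0, 0.7]
print(find_clipped_runs(audio))  # [(4, 4)]
```

On that logic, audio clipped +1.8dB over full scale should be trivially detectable at a 1% tolerance, which is exactly why the Declipper’s silence is so baffling.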
I can iteratively go through the process of rescanning/analysing and repeatedly fixing the samples found on each pass, but ultimately, the faithfulness of the audio is destroyed.
Given the ability to set the tolerance in the first place, I’m at a loss to explain why the Amplitude Statistics tool and the Declipper report such huge variations when analysing what amounts to the same aspect of the audio.
I’m now going to have to look for a third-party plugin that can perform the declipping function. Not good, Adobe!
If I use the Amplitude Statistics tool, it reports 2936 potentially clipped samples for the left channel, and 3749 for the right channel. Now, I’ve always found the Amplitude Statistics tool to be highly reliable when analysing audio for clipped samples.
Hmm... my experience is that it's only a basic analysis, and often can get this figure considerably wrong. And the devs know this - which is why they are described as potentially clipped. The big problem with clipping is that it's not just about samples that get to the edge of the gamut, but the effect of the 7-8 samples each side of a 0dB one, which are the ones that really affect the degree of overshoot. Every time a sample hits 0dB it could be completely legitimate; a sample that falls exactly at the top of a sine wave is there quite legitimately, but a string of samples just below this level can represent a considerable overload. So, the stats analysis counts samples in positions that could potentially represent a problem, but doesn't do the complete analysis at this stage - simply because it's too time-consuming.
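The distinction is easy to demonstrate with a sketch (the 3dB of boost and the 0.9999 detection threshold are arbitrary values for the demo, nothing more): a sine sampled so that one sample lands exactly on each peak produces single, perfectly legitimate full-scale samples, while the same sine boosted and hard-limited produces runs of pinned samples:

```python
import math

N = 64  # samples per cycle
# A full-scale sine sampled so one sample lands exactly on each peak:
# those single 0dB samples are entirely legitimate.
legit = [math.sin(2 * math.pi * n / N) for n in range(N)]
# The same sine boosted by 3dB and hard-limited at full scale:
# the overload shows up as *runs* of samples pinned at 1.0.
gain = 10 ** (3 / 20)
clipped = [max(-1.0, min(1.0, gain * s)) for s in legit]

at_peak = sum(1 for s in legit if abs(s) >= 0.9999)
pinned = sum(1 for s in clipped if abs(s) >= 0.9999)
print(at_peak, pinned)  # 2 30
```

A raw count of full-scale samples can't tell those two situations apart, which is why "potentially clipped" is the operative phrase.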
And yes, the whole thing's not so easy to explain without pictures, as it's almost counter-intuitive. Fortunately for you, there's a document I prepared some time back that shows how the overload occurs with legitimate samples, using Audition 1, extremely clearly. You can look at this here.
I appreciate that the Amplitude Statistics tool is not fully accurate – we agree it reports potentially clipped samples, as I’ve already stated. I also appreciate how modifying a sample has a knock-on effect on adjacent samples. In the good old days, when you could actually drag a single sample with the mouse, you could see exactly how cause and effect worked in a very visual way.
However, that does not detract from the fact that when a piece of audio I know to be clipped, regardless of what the Amplitude Statistics tool reports and how much you believe its findings, is analysed by the Declipper tool (with sensible settings), it returns nothing.
We appear to have one tool (Amplitude Statistics) that some may regard as aggressive in returning a high number of “potential” clipped samples, and a different tool (Declipper) which is so conservative that it returns none.
There’s no consistency here, and it’s hard to determine what is best.
I wait with interest to see what Adobe’s forthcoming (at least here in the UK) “Adobe Audition CS6 Classroom in a Book” has to say on the subject!
Further findings on the use of the Declipper.
Please have a look at the following https://docs.google.com/open?id=0B_sOlIozDEkiTVBqVXhRdFREcjA
Not conclusive evidence, but interesting nonetheless.
It's not evidence of anything, unfortunately, except some more misunderstanding. It's not samples that clip - they can't. That waveform wasn't clipped, or overshooting - if it was, then it would have looked more like the ones in my pdf that go above 0dB. And, just boosting the level of a waveform in edit view won't cause clipping unless it's specifically opened as an integer 16-bit file, and the ones in the example don't look as though they are - this looks like a 32-bit FP file.
And that's the problem really. To establish whether something really is 'clipped' as such isn't simple at all. You are essentially looking for the symptoms of an input overload at the hardware, digitising stage, and the effect that this would have on the resultant waveform - and that's not 100% predictable from a simple analysis.
I'm not saying that Audition necessarily gets it correct - I haven't even begun to test that hypothesis. All I'm saying is that you should very much beware of simple answers to complicated questions, and vice versa. Both are likely to be wrong.
Yes, you are quite correct; I wish I’d used a truly clipped sample (unfortunately none was to hand at the time of writing) rather than simply adding gain to a waveform that sends some of the samples above 0dB. If the waveform had been truly clipped, all of the samples exceeding 0dB would have aligned at 0dB – and in that sense, the shape of the waveform beyond 0dB is lost. We can only approximate what it may have been by comparing the samples to each side.
I understand your argument as presented in the example you provided the link for, but you’ve manipulated samples in isolation and shown the effect on a waveform that goes beyond 0dB. You could have done the same thing at -20dB; the waveform would have looked the same, just lower in amplitude. I wouldn’t call that clipping; I’d call it an example of what happens when a digital representation of the real waveform is imprecise. I would see your example as more accurately describing sampling errors: the algorithm that seeks to recreate the true waveform does the best it can, and hence you see the result. It will work; it just isn’t right.
In the case of a digital recording system that’s presented with a sound it cannot represent as an accurate value (e.g. recording a sound that overloads the input), it substitutes the highest legitimate value it can accommodate, which is 0dB. So if sample 20 is 0dB and sample 30 is 0dB, and samples 21-29 are greater than 0dB, the digital system recording the incoming audio will show all samples from 20 to 30 as 0dB. I know that’s another oversimplification, but in my mind at least that would be clipping, though you may call it something else.
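That substitution is also why declipping can only ever be an estimate: once the recorder has pinned the over-range samples, two quite different overloads become indistinguishable. A minimal sketch of the idea (the sample values are invented for illustration):

```python
FULL_SCALE = 1.0

def record(samples):
    """A digital recorder can't store values beyond full scale;
    anything outside the range is substituted with the maximum code."""
    return [min(FULL_SCALE, max(-FULL_SCALE, s)) for s in samples]

# Two different overloads...
wave_a = [0.9, 1.0, 1.2, 1.5, 1.2, 1.0, 0.9]
wave_b = [0.9, 1.0, 1.1, 1.3, 1.1, 1.0, 0.9]
# ...record identically: the shape above full scale is simply gone.
print(record(wave_a) == record(wave_b))  # True
```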
You could have done the same thing at -20dB; the waveform would have looked the same, just lower in amplitude. I wouldn’t call that clipping; I’d call it an example of what happens when a digital representation of the real waveform is imprecise. I would see your example as more accurately describing sampling errors: the algorithm that seeks to recreate the true waveform does the best it can, and hence you see the result. It will work; it just isn’t right.
Clipping's not quite as straightforward as that, I'm afraid. If the samples are in the correct relative positions, then there has been no overload situation in the digital sense - but if you try to output the waveforms I created (which could occur quite legitimately), you'd almost certainly find that the D-A converter itself would clip, as you would have exceeded its analog output capability. But the samples, as I said, could be legitimate... and to be fair, most manufacturers these days recognise this situation, and allow sufficient headroom for the worst case output scenario.
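The classic demonstration of this is a sine at a quarter of the sample rate, phased so that every sample lands at about -3dBFS: no sample ever reaches 0dB, yet the waveform the D-A converter must reconstruct peaks at full scale. A sketch approximating ideal reconstruction with band-limited (FFT zero-padding) upsampling:

```python
import numpy as np

n = np.arange(16)
# A sine at a quarter of the sample rate, phased so every sample
# lands at +/-0.7071 (about -3dBFS)...
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
# ...yet the continuous waveform those samples represent peaks at 1.0.
# Approximate ideal D-A reconstruction by zero-padding the spectrum
# (band-limited 4x upsampling):
X = np.fft.fft(x)
up = 4
Xp = np.zeros(len(x) * up, dtype=complex)
Xp[:len(x) // 2] = X[:len(x) // 2]
Xp[-(len(x) // 2):] = X[-(len(x) // 2):]
y = np.real(np.fft.ifft(Xp)) * up

print(round(float(np.max(np.abs(x))), 4),
      round(float(np.max(np.abs(y))), 4))  # 0.7071 1.0
```

Every sample is legitimate and well below full scale, but the reconstructed peak is 3dB higher than any of them, which is exactly the headroom problem described above.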
Clipping is only a form of distortion - in other words, the arrangement of the samples does not represent the rates of change of the input signal. So any sample that's correctly in place at 0dB isn't clipped. Real-world clipping, when it occurs, has to occur either in the A-D or D-A conversion. So let's be very clear - clipping isn't a digital thing at all, and correcting it (since you have no original reference) is at best a statistical process. On a rising waveform, what you have to analyse is whether the rate of change increase that suddenly arises when a sample can't go as high as the previous several samples might indicate is correct or not. In the case of a string of 0dB samples, then there's an extremely good chance that they are incorrect (unless this is a high amplitude square wave...), but you still have no information, other than a spectral analysis guess from either side, as to what the correct positions should be.
And it's this that the Amplitude statistics can't work out, because there simply isn't time. And it's also what causes the clipping analysis sometimes to come up with results that don't appear to make sense. It really is a case of analysing where a waveform is rising, and where it falls again, and trying to correlate the samples in between to represent where it thinks the peak should be. Obviously the closer you get to the Nyquist limit, the easier this is, because in the limiting case, you can guarantee that a sample that appears to be too low (apparent increase in rate of change) actually is. The lower the rate of change though, the harder to analyse, and more imprecise, that process gets.
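As a toy illustration of that "guess from either side" process (a crude parabola fit, nothing like the spectral interpolation a real declipper such as Audition's FFT mode would use): clip a sine, then estimate the lost peak from the good samples flanking the pinned run:

```python
import numpy as np

# A sine peak, hard-limited at 0.8 of full scale:
n = np.arange(32)
true_wave = np.sin(2 * np.pi * n / 32)
clipped = np.clip(true_wave, -0.8, 0.8)

# Find the pinned run, then fit a parabola to a few good samples on
# either side of it and read off the fitted peak as the estimate.
bad = np.where(clipped >= 0.8)[0]           # samples 5..11 are pinned
left = np.arange(bad[0] - 4, bad[0])
right = np.arange(bad[-1] + 1, bad[-1] + 5)
idx = np.concatenate([left, right])
a, b, c = np.polyfit(idx, clipped[idx], 2)
peak_est = np.polyval([a, b, c], -b / (2 * a))  # value at the vertex

print(round(float(peak_est), 2))  # 0.95 (the true peak was 1.0)
```

Even in this idealised case the estimate falls short of the true peak, and the lower the rate of change relative to the sample rate, the worse such a guess gets, which is the point being made above.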