Over sampling and hard/soft clipping

DSP, Plugin and Host development discussion.

Post

Zaphod (giancarlo) wrote: Mon Nov 28, 2022 6:48 pm
mystran wrote: Mon Nov 28, 2022 11:50 am Neither really. The principal reason is to give you a warm and fuzzy reason and help sell more plugins.
No one does such a thing for DACs. Realistically, it's done for MP3 conversion or the like, especially when you're distributing a song for people to listen to.
The problem though is that lossy compression can introduce additional inter-sample peaks (and even sampled values exceeding the nominal bounds) by quantizing (and possibly discarding some of) the frequency content, so limiting the source material doesn't mean much and any decoder will need headroom either way... and some codecs do far worse things than a little bit of ISP clipping anyway.

I'm not necessarily against inter-sample limiting, because theoretically it's the correct thing to do, and it makes a limiter less sensitive to phase variation, so the results are more consistent and it's reasonable to expect that it just sounds better. It's certainly possible to come up with degenerate signals where the inter-sample peaks are so high they might realistically cause issues. But the idea that you can somehow predict the exact final peak levels of a reconstructed signal is just nonsense, and any DAC or codec that doesn't have headroom is simply broken.

Post

Music Engineer wrote: Mon Nov 28, 2022 9:26 pm
mystran wrote: Mon Nov 28, 2022 9:08 pm From a purely theoretical point of view I don't think there's really an upper bound. For practical purposes the theoretical upper bound depends on the reconstruction filter.
Hmm... I mean, the Wilbraham-Gibbs constant already assumes an ideal brickwall filter. Could a real, non-ideal filter actually be *worse* than that in terms of overshoot?
As far as I understand the constant applies to Fourier series (ie. periodic functions), so there is a countable number of sinusoids to work with... plus it's about approximation of bounded piece-wise continuous functions, not sampled signals. If we assume an infinite sampled signal (eg. an infinite sine with a phase discontinuity in the middle) then significantly worse situations are possible. I don't think there's really a finite bound.

ps. For a reconstruction kernel with finite support though, the limit is basically the largest sum of absolute values for any subsample offset, because if we know the kernel, we can generate a signal such that it's +1 where the kernel is positive and -1 where it's negative.
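To make that concrete, here's a small sketch in Python/NumPy (the Hann-windowed sinc and the half-width of 8 are illustrative choices, not anything specific from this thread): for each subsample offset d, sample the continuous kernel at n + d; the worst-case bounded input is the sign pattern of those taps, and the resulting peak is the sum of their absolute values.

```python
import numpy as np

def windowed_sinc(t, half_width=8.0):
    # Hann-windowed sinc with support [-half_width, half_width]
    w = np.where(np.abs(t) <= half_width,
                 0.5 + 0.5 * np.cos(np.pi * t / half_width), 0.0)
    return np.sinc(t) * w

def worst_case_peak(kernel, half_width, num_offsets=64):
    n = np.arange(-half_width, half_width + 1)
    best = 0.0
    for d in np.linspace(0.0, 1.0, num_offsets, endpoint=False):
        taps = kernel(n + d)  # kernel sampled at this subsample offset
        best = max(best, float(np.sum(np.abs(taps))))
    return best

peak = worst_case_peak(windowed_sinc, 8)
print(peak)  # worst-case inter-sample peak (linear gain) for |x[n]| <= 1
```

For a linear-phase kernel like this one, the maximum is found at (or very near) d = 0.5, as discussed below.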
Last edited by mystran on Mon Nov 28, 2022 9:53 pm, edited 1 time in total.

Post

mystran wrote: Mon Nov 28, 2022 9:41 pm As far as I understand the constant applies to Fourier series (ie. periodic functions), so there is a countable number of sinusoids to work with... plus it's about approximation of bounded piece-wise continuous functions, not sampled signals. If we assume an infinite sampled signal (eg. an infinite sine with a phase discontinuity in the middle) then significantly worse situations are possible.
Hmm - OK - I need to think about this a bit.
we can generate a signal such that it's +1 where the kernel is positive and -1 where it's negative.
Aha! Yes! That makes sense.
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

Music Engineer wrote: Mon Nov 28, 2022 9:52 pm
mystran wrote: Mon Nov 28, 2022 9:41 pm As far as I understand the constant applies to Fourier series (ie. periodic functions), so there is a countable number of sinusoids to work with... plus it's about approximation of bounded piece-wise continuous functions, not sampled signals. If we assume an infinite sampled signal (eg. an infinite sine with a phase discontinuity in the middle) then significantly worse situations are possible.
Hmm - OK - I need to think about this a bit.
Take your kernel halfway between integral samples. Take the sign function of the result and scale to the maximum allowed amplitude. That's (at least most likely, assuming a linear-phase kernel) the worst-case input and the worst-case output is the sum of the absolute values of the filter taps at this time offset.

Post

mystran wrote: Mon Nov 28, 2022 9:56 pm Take your kernel halfway between integral samples. Take the sign function of the result and scale to the maximum allowed amplitude.
so, like:

x[n] = sign(h[n + 0.5])?

...assuming max-amplitude = 1. What's the purpose of taking it halfway between samples - and how would I do this? Linear interpolation? ...or sinc-interpolation?

Post

Music Engineer wrote: Mon Nov 28, 2022 10:05 pm
mystran wrote: Mon Nov 28, 2022 9:56 pm Take your kernel halfway between integral samples. Take the sign function of the result and scale to the maximum allowed amplitude.
so, like:

x[n] = sign(h[n + 0.5])?

...assuming max-amplitude = 1. What's the purpose of taking it halfway between samples - and how would I do this? Linear interpolation? ...or sinc-interpolation?
That's kinda poor wording. For continuous-time reconstruction you need a continuous-time kernel and what I mean is sampling this kernel halfway between integer multiples of the sampling period, so that we get the weights used to reconstruct the signal halfway between sampling points. This is where at least for a linear-phase kernel you'd expect either the maximum possible inter-sample peak, or at least something close to it.

To maximize the inner product (during convolution) with a signal (subject to some maximum amplitude, which we can take as unity), you can then choose the "evil" signal as the sign function of this sampled kernel. This makes each element-wise product positive (the signs cancel), so the inner product becomes the sum of the absolute values of the (sampled) kernel taps.

For a polyphase kernel with a finite number of branches, you can also just take the sum of absolute values for each branch and pick the branch that gives the largest value. Again, if you then extract the signs of that branch as a signal, you have the worst-case excitation.
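A hypothetical sketch of that polyphase procedure (the 4x windowed-sinc prototype is just an illustrative example): split the FIR into L branches, pick the branch with the largest sum of absolute taps, and read off its sign pattern as the worst-case input.

```python
import numpy as np

def polyphase_worst_case(fir, L):
    branches = [fir[k::L] for k in range(L)]
    sums = [float(np.sum(np.abs(b))) for b in branches]
    k = int(np.argmax(sums))      # branch with the largest L1 norm
    evil = np.sign(branches[k])   # worst-case excitation (values in -1/0/+1)
    return sums[k], evil

L = 4
n = np.arange(-32, 33)
fir = np.sinc(n / L) * np.hanning(len(n))  # 4x interpolation prototype
peak, evil = polyphase_worst_case(fir, L)
print(peak)  # largest possible output sample for inputs bounded by 1
```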

Now, Wolfram Alpha gives a partial expansion in terms of the digamma function for the sum sin(pi*(n+0.5))/(pi*(n+0.5)), n = -inf to inf, so I don't think there's a finite bound for the ideal kernel... but there is a finite bound for any (bounded) kernel with compact support, for hopefully obvious reasons.

Post

OK - I see. I was confusing the discrete with the continuous time domain.

Post

mystran wrote: Tue Nov 29, 2022 2:03 am Now, Wolfram Alpha gives a partial expansion in terms of the digamma function for the sum sin(pi*(n+0.5))/(pi*(n+0.5)), n = -inf to inf, so I don't think there's a finite bound for the ideal kernel... but there is a finite bound for any (bounded) kernel with compact support, for hopefully obvious reasons.
Well, 1/1 + 1/2 + 1/3 + ... + 1/n does diverge, although very slowly (like ln(n)).

So for the ideal sinc filter, that "killer" signal would be (-1)^n (same for any positively-windowed sinc, which is most of them).
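The divergence is easy to check numerically: the partial sums of |sinc(n + 0.5)| (the absolute taps of the ideal kernel halfway between samples) keep growing, roughly like (2/pi)*ln(N).

```python
import numpy as np

# Partial sums of the absolute ideal-kernel taps at subsample offset 0.5.
# No finite bound: each doubling/tenfold increase of N adds roughly
# (2/pi)*ln(10) to the sum.
sums = []
for N in (10, 100, 1000, 10000):
    n = np.arange(-N, N)
    sums.append(float(np.sum(np.abs(np.sinc(n + 0.5)))))
print(sums)  # strictly increasing, unbounded as N grows
```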

That (-1)^n is an interesting signal. As far as I am aware, it doesn't correspond to any analog signal obtained through sampling, because the Nyquist theorem doesn't allow reconstruction of that corner-case frequency.

Maybe someone with more knowledge can elaborate more on that particular corner-case signal.

Post

2DaT wrote: Tue Nov 29, 2022 10:25 pm So for the ideal sinc filter, that "killer" signal would be (-1)^n (same for any positively-windowed sinc, which is most of them).
Isn't that just the half-sample-rate cosine that has its peaks at the original samples? Halfway between the original samples, the series from 0 towards +inf and towards -inf would both diverge, but with opposite sign and equal magnitude, so the limit is still zero, isn't it? That's a rather theoretical situation, but practically speaking, after windowing this is no longer an issue (compact support) and we'd usually expect the gain at cutoff to be around -6 dB (i.e. approximately the average of the left and right limits).

If we flip the sign of one half of our signal, e.g. sign(n)*(-1)^n (so that there is exactly one duplicate sample), then there is a point in time where both limit series diverge with the same sign, and the peak is unbounded under ideal reconstruction.
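A sketch of that construction (taking sign(0) = +1, so x[-1] = x[0] = 1 is the duplicate pair): reconstruct x[n] = sign(n)*(-1)^n with a truncated ideal sinc kernel and evaluate the result at t = -0.5, halfway between the duplicate samples. The peak keeps growing with the truncation length N, consistent with an unbounded peak under ideal reconstruction.

```python
import numpy as np

def peak_at_half(N):
    n = np.arange(-N, N + 1)
    # sign(n)*(-1)^n with sign(0) taken as +1
    x = np.where(n >= 0, 1.0, -1.0) * (-1.0) ** n
    # ideal (truncated) sinc reconstruction evaluated at t = -0.5
    return float(np.sum(x * np.sinc(-0.5 - n)))

for N in (10, 100, 1000):
    print(N, peak_at_half(N))  # grows roughly like (2/pi)*ln(N) per side
```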

Post

mystran wrote: Wed Nov 30, 2022 9:39 am
2DaT wrote: Tue Nov 29, 2022 10:25 pm So for the ideal sinc filter, that "killer" signal would be (-1)^n (same for any positively-windowed sinc, which is most of them).
Isn't that just the half-sample-rate cosine that has its peaks at the original samples? Halfway between the original samples, the series from 0 towards +inf and towards -inf would both diverge, but with opposite sign and equal magnitude, so the limit is still zero, isn't it?
And what would the corresponding sine be?

Post

2DaT wrote: Wed Nov 30, 2022 1:11 pm
mystran wrote: Wed Nov 30, 2022 9:39 am
2DaT wrote: Tue Nov 29, 2022 10:25 pm So for the ideal sinc filter, that "killer" signal would be (-1)^n (same for any positively-windowed sinc, which is most of them).
Isn't that just the half-sample-rate cosine that has its peaks at the original samples? Halfway between the original samples, the series from 0 towards +inf and towards -inf would both diverge, but with opposite sign and equal magnitude, so the limit is still zero, isn't it?
And what would the corresponding sine be?
In real-valued sampling, there is none. It's the exact same situation as with DC. For a sampled signal (or a digital filter, or whatever) to be real, we need to satisfy the conjugate symmetry of the z-transform, H(exp(i*w)) = conj(H(exp(-i*w))), and since exp(i*pi) = exp(-i*pi), the transform must be real-valued there.
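A quick numerical check of that symmetry argument (the random 16-tap filter is just an arbitrary stand-in): for any real-valued FIR, the frequency response at half the sampling rate (w = pi, i.e. z = -1) has no imaginary part, and the grid samples of a sine exactly at that frequency are all (numerically) zero.

```python
import numpy as np

h = np.random.default_rng(0).standard_normal(16)  # arbitrary real FIR
k = np.arange(len(h))
H_pi = np.sum(h * np.exp(-1j * np.pi * k))        # H(exp(i*pi))
print(abs(H_pi.imag))                             # ~0: response at Nyquist is real

nyquist_sine = np.sin(np.pi * np.arange(8))       # sine at half the sampling rate
print(np.max(np.abs(nyquist_sine)))               # ~0: invisible to the sampling grid
```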

As far as I can see, you could state the perfect reconstruction property as covering signals below half the sampling rate plus the grid-aligned (cosine) component of a signal exactly at half the sampling rate, but... the slight generalization is usually of little practical interest (you can't have a brickwall outside mathematics anyway).
