Shortest pulse width to support

DSP, Plugin and Host development discussion.

Post

Z1202 wrote: Thu Nov 21, 2019 11:20 am
mystran wrote: Mon Nov 11, 2019 4:56 pm but most methods of band-limited waveform generation tend to assume that frequency is changing slowly enough that the effects can be neglected, because this makes the problems much more tractable.
I think that the BLEP method is immune to audio-rate modulation as long as the segments (after the modulation) between the transients are of polynomial shape (linear, parabolic etc.). This keeps the segments themselves "bandlimited" and simultaneously keeps the number of derivative discontinuities finite. So in the case of PWM I think the BLEP method works perfectly (save for the time-limiting of the BLEPs), provided you identify the subsample positions of the transitions correctly.
A piecewise polynomial waveform modulated by a piecewise polynomial modulator is still a piecewise polynomial waveform, of a higher degree (and possibly with more segments). If the modulation increases the number of discontinuous derivatives, you need to BLEP those derivatives, and you should also correct for the discontinuities of the modulator itself. Usually we do none of this and just sample the (arbitrary) modulation signal once per sample (i.e. the "continuous-time" modulation is a ZOH signal).
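To make the "usual practice" concrete, here is a minimal polyBLEP square-wave sketch in Python. All names and the 2-point polynomial residual are illustrative (this is not code from anyone in the thread): the pulse width is read once per sample (exactly the ZOH treatment described above), and only the step discontinuities are corrected, not the modulator's own discontinuities.

```python
import numpy as np

def poly_blep(t, dt):
    """2-point polynomial BLEP residual; t is the phase distance from the
    step (in cycles, 0..1), dt the per-sample phase increment."""
    if t < dt:                     # just after the step
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:               # just before the step
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def square(freq, sr, n, pw=0.5, use_blep=True):
    """Naive or polyBLEP-corrected square; pw is sampled once per sample (ZOH)."""
    out = np.empty(n)
    phase, dt = 0.0, freq / sr
    for i in range(n):
        y = 1.0 if phase < pw else -1.0
        if use_blep:
            y += poly_blep(phase, dt)               # rising edge at phase 0
            y -= poly_blep((phase - pw) % 1.0, dt)  # falling edge at phase pw
        out[i] = y
        phase = (phase + dt) % 1.0
    return out
```

The 2-point residual is the crudest possible BLEP; the discussion below about longer kernels and higher orders applies unchanged, only the residual table gets bigger.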

Post

mystran wrote: Thu Nov 21, 2019 11:59 am If the modulation increases the number of discontinuous derivatives, you need to BLEP derivatives and you also should correct for the discontinuities of the modulator.
Generally correct, but for PWM you don't need to. As long as you can represent the modulator analytically and solve for the transient position, you should be able to do perfect antialiasing (limited only by the BLEP length).

Post

Z1202 wrote: Thu Nov 21, 2019 12:17 pm
mystran wrote: Thu Nov 21, 2019 11:59 am If the modulation increases the number of discontinuous derivatives, you need to BLEP derivatives and you also should correct for the discontinuities of the modulator.
Generally correct, but for PWM you don't need to. As long as you can represent the modulator analytically and solve for the transient position, you should be able to do perfect antialiasing (limited only by the BLEP length).
Oh, sure. Treating the modulator analytically is usually excessive though, as approximating it as linear over the sampling period is typically good enough to avoid any glitches.
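A sketch of that idea (hypothetical helper name, Python): with the phase and the linearly-interpolated pulse width both linear in the sub-sample offset, the transition position comes out of a single linear equation.

```python
def pwm_crossing(phase0, dt, pw0, pw1):
    """Sub-sample position of the falling edge within one sample, assuming
    the pulse-width modulator moves linearly from pw0 to pw1 over the sample.
    Solves phase0 + alpha*dt = pw0 + alpha*(pw1 - pw0) for alpha.
    Returns alpha in [0, 1), or None if no crossing happens this sample."""
    dpw = pw1 - pw0
    denom = dt - dpw
    if denom <= 0.0:
        return None  # the phase never catches the moving threshold this sample
    alpha = (pw0 - phase0) / denom
    return alpha if 0.0 <= alpha < 1.0 else None
```

The returned alpha is then used as the sub-sample offset when placing the BLEP for that transition.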

Post

Z1202 wrote: Thu Nov 21, 2019 11:20 am I have a feeling that a signal can be seen as "generically bandlimited" iff the sequence of BLEPs fixing the derivative discontinuities converges as we fix more and more derivatives, but I don't have a proof. When looking around I found the Paley-Wiener-Schwartz theorem, which might be closely related to that question.
I have a feeling that it is essentially a matter of whether or not some polynomial approximation to the function converges. If it does, then we can replace the function by such a polynomial and bandlimit "exactly" with a finite number of BLEPs. The remaining error then is the approximation error which (if I'm not mistaken) is directly related to higher derivatives of the original function.
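As a concrete (if informal) Python illustration of that convergence: on a fixed segment the Taylor polynomials of an analytic function like sin converge uniformly, and the worst-case error is governed by the first omitted (higher-derivative) term. Function names are made up for the sketch.

```python
import math

def taylor_sin(x, n_terms):
    """Partial Taylor sum of sin(x) about 0, using n_terms nonzero terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

def max_err(n_terms, a=-math.pi, b=math.pi, samples=101):
    """Worst-case approximation error of the partial sum on [a, b]."""
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    return max(abs(taylor_sin(x, n_terms) - math.sin(x)) for x in xs)
```

With 10 nonzero terms the error on [-pi, pi] is already far below single-precision noise, which is the sense in which "replace the function by a polynomial" can work on a compact segment.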

Post

mystran wrote: Fri Nov 22, 2019 3:22 am
Z1202 wrote: Thu Nov 21, 2019 11:20 am I have a feeling that a signal can be seen as "generically bandlimited" iff the sequence of BLEPs fixing the derivative discontinuities converges as we fix more and more derivatives, but I don't have a proof. When looking around I found the Paley-Wiener-Schwartz theorem, which might be closely related to that question.
I have a feeling that it is essentially a matter of whether or not some polynomial approximation to the function converges. If it does, then we can replace the function by such a polynomial and bandlimit "exactly" with a finite number of BLEPs. The remaining error then is the approximation error which (if I'm not mistaken) is directly related to higher derivatives of the original function.
Consider a sine above Nyquist with a sudden change in phase. The sine itself is an analytic function, thus its Taylor polynomial approximations converge. However, the BLEPs (for the different derivative orders) need to be scaled by the amounts of the jumps in the respective derivatives, and those jumps are proportional to w^n. So we need to make sure that the sum of w^n*BLEPn converges (where BLEPn is the unit BLEP residual for the n-th derivative). Now, are you suggesting that w^n*BLEPn -> 0 for all w?
I mentioned the Paley-Wiener-Schwartz theorem because it connects bandlimitedness (or IIRC actually its dual, timelimitedness, but it shouldn't matter) with the decay rate of the Taylor series coefficients.
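The w^n scaling of the derivative jumps is easy to check numerically (Python sketch, made-up names; it uses d^n/dt^n sin(w*t + phi) = w^n * sin(w*t + phi + n*pi/2)):

```python
import math

w = 1.5 * math.pi      # above Nyquist for T = 1 (Nyquist at pi rad/sample)
phase_jump = 0.5       # sudden phase change at t0

def deriv_jump(n, t0=0.0):
    """Jump of the n-th derivative of sin(w*t) across a phase discontinuity."""
    before = w ** n * math.sin(w * t0 + n * math.pi / 2)
    after = w ** n * math.sin(w * t0 + phase_jump + n * math.pi / 2)
    return after - before

# successive jump magnitudes; each even/odd pair grows exactly by w^2
jumps = [abs(deriv_jump(n)) for n in range(8)]
```

Since |jump(n+2)| = w^2 * |jump(n)| exactly, the BLEP scale factors do grow geometrically with the order, which is the heart of the convergence question being debated here.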

Post

Z1202 wrote: Fri Nov 22, 2019 12:17 pm So we need to make sure that the sum of w^n*BLEPn converges (where BLEPn is the unit BLEP residual for the n-th derivative). Now are you suggesting that w^n*BLEPn->0 for all w?
I'm essentially suggesting that for any compact segment of any function with a convergent power series, the sum of ideal BLEPs converges (although not necessarily fast) to the difference between the original segment and its band-limited version.

That said, I'm not suggesting this holds for practical BLEPs with compact support and finite attenuation, nor am I suggesting that this necessarily works in finite-precision floating-point arithmetic (which actually tends to be the limiting factor with regard to the maximum workable BLEP order).

edit: actually let me clarify that this clearly doesn't work if you insert a series of BLEPs at one end-point only, but I'm suggesting it does work when you consider both end-points together

Post

mystran wrote: Fri Nov 22, 2019 2:45 pm
Z1202 wrote: Fri Nov 22, 2019 12:17 pm So we need to make sure that the sum of w^n*BLEPn converges (where BLEPn is the unit BLEP residual for the n-th derivative). Now are you suggesting that w^n*BLEPn->0 for all w?
I'm essentially suggesting that for any compact segment of any function with a convergent power series, the sum of ideal BLEPs converges (although not necessarily fast) to the difference between the original segment and its band-limited version.

That said, I'm not suggesting this holds for practical BLEPs with compact support and finite attenuation, nor am I suggesting that this necessarily works in finite-precision floating-point arithmetic (which actually tends to be the limiting factor with regard to the maximum workable BLEP order).

edit: actually let me clarify that this clearly doesn't work if you insert a series of BLEPs at one end-point only, but I'm suggesting it does work when you consider both end-points together
I don't see why the reasoning I gave in my example wouldn't apply to a compact segment of a sine above Nyquist. My conjecture is that the sum of ideal BLEP residuals will diverge in this case. Notice that the BLEP amplitude scaling factors grow faster than pi^n (for T=1). Or did I misunderstand what you're saying?

Post

Z1202 wrote: Fri Nov 22, 2019 3:08 pm
mystran wrote: Fri Nov 22, 2019 2:45 pm
Z1202 wrote: Fri Nov 22, 2019 12:17 pm So we need to make sure that the sum of w^n*BLEPn converges (where BLEPn is the unit BLEP residual for the n-th derivative). Now are you suggesting that w^n*BLEPn->0 for all w?
I'm essentially suggesting that for any compact segment of any function with a convergent power series, the sum of ideal BLEPs converges (although not necessarily fast) to the difference between the original segment and its band-limited version.

That said, I'm not suggesting this holds for practical BLEPs with compact support and finite attenuation, nor am I suggesting that this necessarily works in finite-precision floating-point arithmetic (which actually tends to be the limiting factor with regard to the maximum workable BLEP order).

edit: actually let me clarify that this clearly doesn't work if you insert a series of BLEPs at one end-point only, but I'm suggesting it does work when you consider both end-points together
I don't see why the reasoning I gave in my example wouldn't apply to a compact segment of a sine above Nyquist. My conjecture is that the sum of ideal BLEP residuals will diverge in this case. Notice that the BLEP amplitude scaling factors grow faster than pi^n (for T=1). Or did I misunderstand what you're saying?
I'm suggesting the BLEPs from the two end-points eventually mostly cancel each other, such that the seemingly high amplitude growth doesn't really matter.

That said, I have not investigated this subject in a rigorous manner, so this is mostly based on intuitive understanding of the subject. What I do know is that such band-limiting of any arbitrary piece-wise polynomial waveform does work, even past Nyquist, until you run into finite-precision issues. I see no reason why this wouldn't work for functions with infinitely many nonzero derivatives if you take the BLEP series to the limit, although I'm also not convinced such a theoretical result would be of much practical value.

edit: As far as I can see, this would basically just be the natural consequence of an analytic function being completely characterized by its derivatives.

Post

mystran wrote: Fri Nov 22, 2019 3:24 pm I'm suggesting the BLEPs from the two end-points eventually mostly cancel each other, such that the seemingly high amplitude growth doesn't really matter.
Intuitively, I very much doubt that.
mystran wrote: Fri Nov 22, 2019 3:24 pm That said, I have not investigated this subject in a rigorous manner, so this is mostly based on intuitive understanding of the subject. What I do know is that such band-limiting of any arbitrary piece-wise polynomial waveform does work, even past Nyquist until you run into finite-precision issues and I see no reason why this wouldn't work for functions with infinite derivatives if you take the BLEP-series to the limit, although I'm also not convinced such a theoretical result would be of much practical value.
The reason would be that the BLEP series simply doesn't converge :D Not unlike the Fourier transform, which doesn't have to converge for an arbitrary analytic function (even if you take Cesaro convergence into account).

Edit: regarding "piece-wise polynomial waveforms past Nyquist", I'm not sure what you're referring to. In my (I have to admit, not very rigorous) understanding, any finite polynomial doesn't have frequencies above 0; you really need an infinite series to get nonzero frequencies into the spectrum.

Edit2: the practical value would be that we would better understand bandlimiting of piecewise waveforms. Can a piecewise waveform be considered bandlimited once all derivative discontinuities have been patched by BLEPs, or not? E.g. for a sine above Nyquist it's likely still not bandlimited, right? But my conjecture is that the BLEP series won't converge there either. Now, can a piecewise exponential function be bandlimited by BLEPs or not? That's a very practical question, isn't it?

Post

Z1202 wrote: Fri Nov 22, 2019 4:28 pm Edit2: the practical value would be that
...that you can approximate anything with cubic splines and push the error as low as you want by using shorter segments. :P

Post

mystran wrote: Fri Nov 22, 2019 5:09 pm
Z1202 wrote: Fri Nov 22, 2019 4:28 pm Edit2: the practical value would be that
...that you can approximate anything with cubic splines and push the error as low as you want by using shorter segments. :P
:D
Interesting approach. But the number of BLEPs could become prohibitively expensive :D [Edit: furthermore, I suspect the BLEP sums from neighboring points might create precision-loss issues, as they will often almost cancel each other, so at least doubles might be necessary.]
I'd still like to know whether a "proper" segment can be considered bandlimited. And if my conjecture is correct, it is bandlimited iff the BLEPs converge, which means that you can use BLEPs to bandlimit any piecewise-bandlimited signal, not only cubic-spline-based ones :D Furthermore, the PWS theorem gives you an easy way to check whether your segment is bandlimited.
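For what it's worth, the "shorter segments" claim is easy to quantify: piecewise cubic (Hermite) approximation of a smooth function has O(h^4) error, so halving the segment length cuts the error by roughly 16x. An illustrative Python sketch (names invented here, exact endpoint derivatives assumed known):

```python
import numpy as np

def hermite_max_err(f, df, a, b, n):
    """Max error of piecewise cubic Hermite interpolation of f on [a, b]
    with n equal segments, using exact endpoint derivatives df."""
    xs = np.linspace(a, b, n + 1)
    t = np.linspace(0.0, 1.0, 64)      # probe points within each segment
    err = 0.0
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        p0, p1, m0, m1 = f(x0), f(x1), h * df(x0), h * df(x1)
        # standard cubic Hermite basis functions
        h00 = 2 * t ** 3 - 3 * t ** 2 + 1
        h10 = t ** 3 - 2 * t ** 2 + t
        h01 = -2 * t ** 3 + 3 * t ** 2
        h11 = t ** 3 - t ** 2
        y = h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
        err = max(err, float(np.max(np.abs(y - f(x0 + t * h)))))
    return err

e8 = hermite_max_err(np.sin, np.cos, 0.0, 2 * np.pi, 8)    # 8 segments
e16 = hermite_max_err(np.sin, np.cos, 0.0, 2 * np.pi, 16)  # halved segments
```

The observed ratio e8/e16 comes out near 16, matching the h^4/384 * max|f''''| error bound for cubic Hermite interpolation.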

Post

Another random thought: if you're fitting splines (e.g. cubics) to a function that is known in advance (save for scaling and time-stretching), then you could solve for a natural spline and only have to BLEP one derivative (e.g. a natural cubic is C2).

No idea if it's "profitable" in practice, but just thought I'd share the idea anyway.
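A quick numerical check of the C2 claim (Python with SciPy; illustrative, not from the thread): a natural cubic spline's second derivative is continuous at the interior knots, while the third derivative generally jumps there, so only the third-derivative (BLAMP-like) correction would be needed per knot.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 9)
cs = CubicSpline(x, np.exp(x), bc_type='natural')  # natural cubic spline

eps = 1e-9
knot = x[4]                                # an interior knot
d2, d3 = cs.derivative(2), cs.derivative(3)
jump2 = abs(float(d2(knot + eps)) - float(d2(knot - eps)))  # ~0: C2 holds
jump3 = abs(float(d3(knot + eps)) - float(d3(knot - eps)))  # nonzero: C3 fails
```

So per knot there is a single scalar jump (in the third derivative) to patch, which is what makes the "only BLEP one derivative" observation attractive.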

Post

Some functions (e.g. sine and exp) can be approximated by polynomials with exponentially convergent coefficients. Don't the BLEPn converge rapidly in the L2 sense? I don't remember exactly, but if they do, then we can use shorter BLEPs for the higher orders.

Post

2DaT wrote: Tue Nov 26, 2019 11:00 pm Don't the BLEPn converge rapidly in the L2 sense? I don't remember exactly, but if they do, then we can use shorter BLEPs for the higher orders.
I looked into this quite extensively at some point and came to the conclusion that the prototype low-pass has to be sufficiently flat around DC, or the DC term of the higher-order integrals of the residuals doesn't vanish. In practice, this leads to a situation where you need to use longer kernels for the higher-order BLEPs just to get well-behaved integrals.

I'd like to emphasise that this does NOT appear to be a numerical issue (except insofar as the "flatness" condition is obviously a bit fuzzy numerically). It can be demonstrated with polynomial interpolation kernels with predictable DC flatness: the BLEPs are well behaved exactly as long as you have enough zero derivatives around DC, and once you run out of those, the next-order BLEP is simply complete garbage.

You can optimise for the flatness condition by using a Lagrange interpolator (which is maximally flat around DC) as the prototype kernel, but these have their own problems (e.g. sidebands around multiples of Nyquist, no real control over the cutoff frequency, etc.). That said, they seem optimal if you want the maximum number of well-behaved BLEPs for the shortest possible kernel.
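One informal way to see the flatness condition in Python (my framing, not necessarily the poster's): the derivatives of a kernel's frequency response at DC correspond, up to constants, to the kernel's moments, and for the n-th order BLEP residual to decay, the first n moments must vanish like those of the ideal (Dirac) kernel. The linear-interpolation (triangle) kernel has a zero first moment but a nonzero second moment, so the second-order BLEP is the first to misbehave.

```python
import numpy as np

# dense grid over the triangle kernel's support
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
h = np.maximum(0.0, 1.0 - np.abs(t))   # linear-interpolation kernel

m0 = np.sum(h) * dt                    # DC gain: 1
m1 = np.sum(t * h) * dt                # 0 by symmetry -> 1st-order BLEP ok
m2 = np.sum(t ** 2 * h) * dt           # 1/6, nonzero -> 2nd order misbehaves
```

The nonzero m2 means the twice-integrated residual acquires a non-vanishing DC term, matching the "next-order BLEP is complete garbage" observation above.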

Post

Z1202 wrote: Fri Nov 22, 2019 4:28 pm The reason would be that the BLEP series simply doesn't converge :D Not unlike the Fourier transform, which doesn't have to converge for an arbitrary analytic function (even if you take Cesaro convergence into account).
Actually, upon further thought you might be right.

The nth derivative of exp(a*t) comes out as a^n*exp(a*t), whose scale factor a^n tends towards zero with increasing n iff |a| < 1, for a in C (with a=0 giving the constant function and a=i*b giving sinusoids). I'm not sure what to make of this exactly (i.e. what the scale of "t" is), except that it suggests there might be a dependency on the angular frequency.
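A trivial numerical restatement of that dichotomy (Python, illustrative):

```python
def jump_scale(a_mag, n):
    """|a|^n: magnitude growth of the n-th derivative of exp(a*t), a in C.
    For a = i*b (a sinusoid), |a| is just the angular frequency |b|."""
    return a_mag ** n

decaying = [jump_scale(0.5, n) for n in (1, 10, 40)]   # |a| < 1: shrinks
growing = [jump_scale(1.5, n) for n in (1, 10, 40)]    # |a| > 1: blows up
```

So for sinusoids the |a| < 1 condition is a condition on the angular frequency, which is the frequency dependency hinted at above.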
