Fast Modulation of Filter Parameters

DSP, Plugin and Host development discussion.

Post

mystran wrote: Thu Apr 25, 2019 10:59 am LTI stands for "linear, time-invariant"
As I've already stated.
mystran wrote: Thu Apr 25, 2019 10:59 am and it's the "time-invariant" property that you are violating when you change the coefficients.
Obviously.

Sorry, you must have meant something else. Not sure I'm getting you.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

I don't understand "or denounce" exploding :) But correct me if I'm wrong: I think this is because the filter acts as a buffer and carries excess momentum forward, and ZDF taking momentum out by flushing the array, acting like a highpass filter to a lowpass filter, is mitigating this issue.

Oversampling by factors of two or four doesn't solve for the ranges found in analog.

Post

S0lo wrote: Wed Apr 24, 2019 11:12 am Any modulation speed of filter cut-off higher than 2.5 kHz will make it explode.
Cutoff modulation of TPT structures is highly unlikely (as in "maybe impossible", although a formal proof doesn't exist yet AFAIK) to explode. Especially since the effect is pretty much independent of the sampling rate, an explosion would hint that the analog filter also explodes in this case, whereas for the analog filter we do have a formal proof that it doesn't. Meaning, it's not just the cutoff modulation, and there is highly likely a mistake somewhere.

A small possibility would be that you exceed the prewarping range and your prewarped cutoff gets very high (say, more than 16 or so), so that there are precision losses, or the prewarped cutoff even goes negative.
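To make that concrete, here is a small Python sketch of the prewarping (the helper name and the clamp limit are illustrative only, not from any particular implementation). Once the instantaneous cutoff is pushed past half the sample rate, the tangent wraps around and the prewarped gain comes out negative, which is exactly the failure mode described above.

import math

def prewarped_gain(fc, fs, max_ratio=0.49):
    """Bilinear-transform prewarping g = tan(pi*fc/fs).

    Hypothetical safeguard: clamp the cutoff to (0, 0.49*fs) so that g
    stays finite and positive even when a modulator overshoots.
    """
    fc = min(max(fc, 1.0), max_ratio * fs)
    return math.tan(math.pi * fc / fs)

fs = 44100.0
for fc in (1000.0, 20000.0, 21609.0, 25000.0):
    g_raw = math.tan(math.pi * fc / fs)   # unclamped, for comparison
    print(f"fc = {fc:7.0f} Hz   g_raw = {g_raw:9.3f}   g_clamped = {prewarped_gain(fc, fs):7.3f}")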

Edit: IIRC, A. Wishnick showed that a TPT SVF doesn't explode under cutoff and resonance modulation, which further hints at a possible mistake in your implementation.
Last edited by Z1202 on Thu Apr 25, 2019 12:10 pm, edited 1 time in total.

Post

Skupje wrote: Thu Apr 25, 2019 11:10 am I don't understand "or denounce" exploding :) But correct me if I'm wrong: I think this is because the filter acts as a buffer and carries excess momentum forward, and ZDF taking momentum out by flushing the array, acting like a highpass filter to a lowpass filter, is mitigating this issue.
In easy terms, an exploding filter basically means that the output of the filter would keep increasing indefinitely until it causes a numerical overflow. As you would expect, there is practically no use for such a filter. It wouldn't produce any usable sound, and the overflow will eat the CPU alive. You could, however, clip the output internally (limit it) to prevent the filter from exploding. You would get some sound, but it's horrendous to say the least.
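For illustration, a toy Python sketch of that internal limiting idea (the clipping curve and the limit are arbitrary choices, not taken from any real filter code). An unstable recursion left alone overflows; the same recursion with a limiter inside the loop stays finite, but at the cost of heavy distortion.

import math

def soft_clip(x, limit=4.0):
    # cheap tanh-style limiter; curve and limit chosen only for illustration
    return limit * math.tanh(x / limit)

a = 1.5                                   # deliberately unstable feedback gain
y_free, y_clipped = 1.0, 1.0
for n in range(500):
    y_free = a * y_free                   # grows without bound
    y_clipped = soft_clip(a * y_clipped)  # settles at a finite value

print(f"unclipped state after 500 samples: {y_free:.3e}")    # around 1e88 and climbing
print(f"clipped state after 500 samples:   {y_clipped:.3f}") # around 3.4, bounded but distorted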
Skupje wrote: Thu Apr 25, 2019 11:10 am Oversampling by factors of two or four doesn't solve for the ranges found in analog.
Not even 32x oversampling would do any good for it, like at all. And I highly doubt that anything higher will do. I could try further when I have time today.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

S0lo wrote: Thu Apr 25, 2019 11:52 am In easy terms, an exploding filter basically means that the output of the filter would keep increasing indefinitely until it causes a numerical overflow.
That has to be a code error; typically the output should be 1/1, or very "digitally" near to it.

I did make a mistake... not sure, though, whether or not the previous integration index affects the next that much? Guess not, in retrospect...

Post

Z1202 wrote: Thu Apr 25, 2019 11:46 am A small possibility would be that you exceed the prewarping range and your prewarped cutoff gets very high (say, more than 16 or so), so that there are precision losses, or the prewarped cutoff even goes negative.
I just tried this. I set the base cutoff frequency of the filter at 1 kHz, and set a modulation depth such that the highest frequency would be less than 2 kHz and the lowest would be about 500 Hz. Then I started modulating with a sine wave. Same results: explosion when the sine wave hits 2 kHz or higher.
Z1202 wrote: Thu Apr 25, 2019 11:46 am Cutoff modulation of TPT structures is highly unlikely (as in "maybe impossible", although a formal proof doesn't exist yet AFAIK) to explode. Especially since the effect is pretty much independent of the sampling rate, an explosion would hint that the analog filter also explodes in this case, whereas for the analog filter we do have a formal proof that it doesn't. Meaning, it's not just the cutoff modulation, and there is highly likely a mistake somewhere.
Hmmm, the thing is that if there was a mistake I would have expected the filter not to work at all, even without modulation. But it's working, identical to the original pole-zero filter prior to inverse BLT: pretty much the same frequency response as far as I can hear and see with a spectrum analyzer.

I don't know if you've read the posts before mine. Let me get back to the original point here and ask a question. Why would a pole-zero filter that was originally unstable at fast modulation become stable when we apply inverse BLT and then apply TPT (essentially applying BLT again)?

Edit: Basically, the S-domain transfer function that we got with inverse BLT becomes invalid once we modulate the cut-off. Therefore, the derivation would be wrong from the start.
Z1202 wrote: Thu Apr 25, 2019 11:46 am Edit: IIRC, A. Wishnick showed that a TPT SVF doesn't explode under cutoff and resonance modulation, which further hints at a possible mistake in your implementation.
That's good to know.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

S0lo wrote: Thu Apr 25, 2019 8:35 pm Hmmm, the thing is that if there was a mistake I would have expected the filter not to work at all, even without modulation.
That depends on the kind of mistake. E.g. what if your structure is not true TPT?
S0lo wrote: Thu Apr 25, 2019 8:35 pm Why would a pole-zero filter that was originally unstable at fast modulation become stable when we apply inverse BLT and then apply TPT (essentially applying BLT again)?
Let me quickly give an overview. The traditional BLT process transforms the transfer function of the filter and then picks an essentially randomly chosen (direct form) topology which gives the required transfer function. In the LTI case the reasoning is mostly correct, however it falls apart once we leave the LTI domain, since we can't really use the concept of transfer functions anymore (as you mentioned). So the filter is kinda free to do whatever it wants. In fact, for a given transfer function there are an infinite number of different topologies implementing this function, and an infinite number of possible time-varying behaviors, some of which preserve the time-varying stability and some of which do not.

OTOH, TPT doesn't deal with frequency domain and works directly in time domain, no transfer functions are involved (we can use them post-factum to analyze the transformation result in the LTI case, but they are not used in the transformation itself). Instead, the transformation directly tries to emulate the time-domain behavior of the analog circuit. Therefore we can reasonably hope that the time-variant behavior will also be more or less correctly modeled, and this is also confirmed by numerous experiments. Also, as the sampling rate grows, our emulation becomes more and more precise, thus the discrepancies from the analog case, if any, should progressively vanish.

Now, for analog (continuous time structures) it can be easily shown (e.g. see the book 2.16) that cutoff modulation doesn't affect the filter stability (under certain conditions, which normally hold). Under general modulations analog structures do not have to be stable, but some of them are. So, since TPT closely emulates the analog behavior, we can hope that the same qualities are preserved after the transformation. Further, as mentioned, Aaron Wishnick explicitly showed the time-varying stability of digital TPT SVF.

To sum up: TPT filters are stable under cutoff modulation with very high probability, and even if they are not, they should become more stable with SR increase. Some of them can be further shown to be perfectly stable.
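For reference, here is a minimal Python sketch of a TPT/ZDF state-variable filter in this spirit (variable names are mine, not from anybody's code; R is the damping, Q = 1/(2R)). The point to notice is that the prewarped cutoff gain g sits directly in front of both trapezoidal integrators, which is the property the modulation-stability argument relies on.

import math

class TPTSVF:
    """Minimal trapezoidal-integrator (TPT/ZDF) state-variable filter sketch.
    s1 and s2 are the integrator states (the "capacitor voltages")."""

    def __init__(self):
        self.s1 = 0.0
        self.s2 = 0.0

    def tick(self, x, fc, fs, R=0.5):
        g = math.tan(math.pi * fc / fs)   # prewarped cutoff gain
        # zero-delay feedback solution for the highpass output
        hp = (x - (2.0 * R + g) * self.s1 - self.s2) / (1.0 + 2.0 * R * g + g * g)
        bp = g * hp + self.s1             # first trapezoidal integrator
        lp = g * bp + self.s2             # second trapezoidal integrator
        self.s1 = bp + g * hp             # state update (= 2*bp - old s1)
        self.s2 = lp + g * bp             # state update (= 2*lp - old s2)
        return lp, bp, hp

# usage: sweep the cutoff at audio rate and watch the output stay bounded
svf, fs, peak = TPTSVF(), 44100.0, 0.0
for n in range(int(fs)):
    fc = 1000.0 + 500.0 * math.sin(2.0 * math.pi * 2500.0 * n / fs)  # 2.5 kHz modulator
    x = 1.0 if n == 0 else 0.0                                       # impulse input
    lp, bp, hp = svf.tick(x, fc, fs)
    peak = max(peak, abs(lp))
print("peak |lp| over 1 s of audio-rate cutoff modulation:", peak)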

I didn't read the details of your original post, but is there any specific reason that you're starting with a discrete-time filter? If you are using any of RBJ filters, they are all derived from analog prototypes, so you're doing the same work backwards to reobtain the prototype and then again in the TPT way. It'd be easier to simply start with an analog filter (the math is also much simpler).

Edit: I'm a bit puzzled trying to decipher the TPT filter in your code. That's not to say that it's wrong, I simply don't know. Usually the free term in the transfer function denominator would be normalized. If it's not, it might hint at cutoff modulation happening at a non-standard position. Maybe it doesn't, but I'd need to figure out how to transform the coefficients which you're using to something closer to the standard form.
Last edited by Z1202 on Fri Apr 26, 2019 7:38 am, edited 1 time in total.

Post

S0lo wrote: Thu Apr 25, 2019 8:35 pm I don't know if you've read the posts before mine. Let me get back to the original point here and ask a question. Why would a pole-zero filter that was originally unstable at fast modulation become stable when we apply inverse BLT and then apply TPT (essentially applying BLT again)?
For LTI systems, you can (for the most part) evaluate stability just by looking at the poles of the transfer function. However, modulation is not LTI, hence this approach is no longer sufficient and the actual state variables of the system become important: in the time-varying case, it's the values you store from one time-step to another that ultimately determine whether the filter remains well-behaved or not.
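As a way to probe this, here is a rough Python test harness (entirely my own sketch, not anything from the thread): a standard RBJ-cookbook lowpass run in direct form II with its coefficients recomputed on every sample under an audio-rate cutoff sweep. The stored values w1 and w2 are internal node samples rather than "capacitor voltages", and the harness simply reports the peak output so it can be compared against a TPT structure, such as the SVF sketched a few posts up, under the same modulation.

import math

def rbj_lowpass(fc, fs, Q=0.7071):
    # RBJ cookbook lowpass coefficients, normalized so that a0 = 1
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * Q)
    cosw0 = math.cos(w0)
    b0 = (1.0 - cosw0) / 2.0
    b1 = 1.0 - cosw0
    b2 = (1.0 - cosw0) / 2.0
    a0 = 1.0 + alpha
    return b0 / a0, b1 / a0, b2 / a0, -2.0 * cosw0 / a0, (1.0 - alpha) / a0

w1 = w2 = 0.0
fs, peak = 44100.0, 0.0
for n in range(int(fs)):
    fc = 1000.0 + 500.0 * math.sin(2.0 * math.pi * 2500.0 * n / fs)  # audio-rate cutoff sweep
    b0, b1, b2, a1, a2 = rbj_lowpass(fc, fs)
    x = 1.0 if n == 0 else 0.0                                       # impulse input
    v = x - a1 * w1 - a2 * w2          # direct form II recursion
    y = b0 * v + b1 * w1 + b2 * w2
    w2, w1 = w1, v
    peak = max(peak, abs(y))
print("peak |y| with per-sample coefficient updates:", peak)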

Post

Z1202 wrote: Fri Apr 26, 2019 7:31 am OTOH, TPT doesn't deal with frequency domain and works directly in time domain, no transfer functions are involved (we can use them post-factum to analyze the transformation result in the LTI case, but they are not used in the transformation itself). Instead, the transformation directly tries to emulate the time-domain behavior of the analog circuit.
Hmm, that basically confirms what I found out in the paper I posted. I've done TPT in the time domain: it's stable. Then did BLT in the frequency domain: it's not stable. Then I did something different: I found (in the time domain) a way to represent the same analog structure WITH feedback delay from the output!! And it's stable!! Check the latter part of the section called "Exposing the guts", where I show that monster formula. It's basically a direct form representation of the TPT structure.
Z1202 wrote: Fri Apr 26, 2019 7:31 am Now, for analog (continuous time structures) it can be easily shown (e.g. see the book 2.16) that cutoff modulation doesn't affect the filter stability (under certain conditions, which normally hold).
Fortunately I've read that section of the book. I happen to mention that in the paper too.
Z1202 wrote: Fri Apr 26, 2019 7:31 am I didn't read the details of your original post, but is there any specific reason that you're starting with a discrete-time filter? If you are using any of RBJ filters, they are all derived from analog prototypes, so you're doing the same work backwards to reobtain the prototype and then again in the TPT way. It'd be easier to simply start with an analog filter (the math is also much simpler).
I did touch on that in the paper and explained a bit of why I'm doing it. I'm actually not starting with RBJ. I'm starting with basic Z-plane analysis, so as to put arbitrary poles and zeros exactly where I want them to be. What bugged me (in the first place, when I didn't know all this) is why, every time I try to modulate those types of filters, they explode.

And then I learned about TPT and all that, which works fantastically well but still relies on analog structures. So coming from that perspective, and having the software tools (which are not free) and the experience to do Z-plane analysis, I find it a bit limiting to rely only on analog structures. In a way it's like building things from Lego blocks (which is fine), but I would also like to have another, more abstract tool.

Edit: Or maybe the problem has just bothered me for such a long time that I'm now so inclined to try to solve it :)
Z1202 wrote: Fri Apr 26, 2019 7:31 am Edit: I'm a bit puzzled trying to decipher the TPT filter in your code. That's not to say that it's wrong, I simply don't know.
The analog structure I'm using was mentioned by a fellow member here earlier. The coefficients here are directly from the S-domain transfer function.

[Image: the analog filter structure, with coefficients taken from the S-domain transfer function]
Z1202 wrote: Fri Apr 26, 2019 7:31 am Usually the free term in the transfer function denominator would be normalized. If it's not, it might hint at cutoff modulation happening at a non-standard position.
The original Z-domain transfer function is a simple two-pole conjugate pair placed at radius R with a cut-off frequency cut, where the term x is the real component: x = cos(2.0*PI*cut).

Two zeros are placed at Nyquist, and the filter is normalized at frequency 0 Hz (z=1) so that the gain is always 0 dB at that frequency.
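In code form, that works out to something like the following Python sketch (cut is assumed here to be a normalized frequency, i.e. a fraction of the sample rate, and the function name is illustrative):

import math

def pole_zero_lowpass(cut, R):
    """Two-pole/two-zero filter as described above (sketch).

    Poles: conjugate pair at R*exp(+/- j*2*pi*cut), so x = cos(2*pi*cut).
    Zeros: double zero at z = -1 (Nyquist).
    Gain normalized so that H(1) = 1, i.e. 0 dB at DC.
    Returns (b0, b1, b2, a1, a2) for
        y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    """
    x = math.cos(2.0 * math.pi * cut)   # the real-component term from the post
    a1 = -2.0 * R * x
    a2 = R * R
    k = (1.0 + a1 + a2) / 4.0           # H(1) = 1  ->  0 dB at DC
    return k, 2.0 * k, k, a1, a2

# quick sanity check: the DC gain should come out as exactly 1
b0, b1, b2, a1, a2 = pole_zero_lowpass(cut=1000.0 / 44100.0, R=0.95)
print("H(1) =", (b0 + b1 + b2) / (1.0 + a1 + a2))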
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

@S0lo. I'll try to take a look at your paper later. Two notes:

1) I'm not sure the structure diagram you posted is cutoff-modulation-stable, because modulation stability is only guaranteed when the cutoff controls sit in front of the integrators (see 2.7 of the book), whereas in your structure the cutoff control seems to be spread across the structure. Typically the cutoff control is assumed to be embedded into the integrators, with the a and b coefficients controlling other aspects of the filter, e.g. resonance. Also see 2.16, which explicitly requires in the beginning: "Suppose the cutoff gain elements are always preceding the integrators and suppose all integrators have the same cutoff gain (that is, these gains always have the same value, even when modulated)."

2) There is a TPT way to control the cutoff of digital structures directly, which is based on the discrete-time cutoff substitution formula. It also preserves the original topology of the digital system. Essentially you can replace each z^-1 with a 1-pole allpass. By controlling the cutoffs of the allpasses you can control the cutoff of the entire filter. Taking TPT 1-pole allpasses you ensure stability in modulation. It's roughly described here
https://www.native-instruments.com/file ... caling.pdf
although the argument in the paper could have been done in a better way.
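For concreteness, here is a minimal Python sketch of a TPT one-pole allpass that could serve as that z^-1 replacement (my own naming; see the linked paper for the actual derivation). The quick check at the end shows that with its cutoff at fs/4 (g = 1) it degenerates into a pure unit delay, so the substitution leaves the original digital filter untouched at that reference point and warps its frequency axis as the allpass cutoffs are moved.

import math

class TPTOnePoleAllpass:
    """TPT one-pole allpass sketch: AP = 2*LP - input, with the lowpass
    realized by a trapezoidal (zero-delay feedback) integrator."""

    def __init__(self):
        self.s = 0.0                      # integrator state

    def tick(self, x, fc, fs):
        g = math.tan(math.pi * fc / fs)   # prewarped cutoff gain
        G = g / (1.0 + g)
        v = G * (x - self.s)              # solve the zero-delay feedback loop
        lp = v + self.s                   # one-pole lowpass output
        self.s = lp + v                   # trapezoidal state update
        return 2.0 * lp - x               # first-order allpass output

# sanity check: with fc = fs/4 (g = 1) the allpass reduces to a unit delay
ap, fs = TPTOnePoleAllpass(), 48000.0
sig = [1.0, 0.0, 0.5, -0.25, 0.0]
print([round(ap.tick(v, fs / 4.0, fs), 6) for v in sig])  # sig delayed by one sample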

One more edit: I literally skimmed through your article. A note: you can do pole-zero design in ZDF/TPT using the analog plane exactly the same way you do pole-zero design in the z-plane. And I think the reasoning is usually easier in the s-plane than in the z-plane, since the frequency axis is fully linear.

Post

Urgh... someone really needs to figure out an exact mathematical definition for this, because it's really getting silly.

Here's what I propose (without proof, but I'm fairly confident these are the sufficient and necessary conditions): if the transfer function of each of the state variables (e.g. as measured from the capacitor voltages) can be written as a rational function of (s/w) without any other free instances of either s or w (where the latter is the variable cutoff), then the structure is unconditionally stable under cutoff modulation (for strictly positive finite w). If such a structure is numerically simulated using a consistent A-stable method (such as trapezoidal), then the simulation is also unconditionally stable under cutoff modulation (again, for strictly positive, finite w).

For emphasis: this is about the transfer functions at the state variables, not just the outputs of some macro-integrator blocks, because the internal gains are critically important.

ps. for numerical implementation, the "finite w" should be understood as "finite enough that floating point range doesn't become an issue" ... but like in practice 0.49 times the sampling rate only gives a pre-warped trapezoidal coefficient of ~31.8 which should generally be fine...
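That condition can be checked mechanically. Here is a small sympy sketch (the transfer functions below are just the textbook analog SVF ones, picked as an example, and the helper name is mine): substitute s = w*u and see whether any free w survives.

import sympy as sp

s, w, u, R = sp.symbols('s w u R', positive=True)

def is_function_of_s_over_w(H):
    # the proposed condition: after substituting s = w*u, no free w may
    # remain, i.e. H is expressible as a rational function of s/w alone
    return w not in sp.cancel(H.subs(s, w * u)).free_symbols

# transfer functions at the state variables of the analog SVF
# (the integrator outputs BP and LP), cutoff w, damping 2R:
D  = s**2 + 2 * R * w * s + w**2
BP = w * s / D
LP = w**2 / D
print(is_function_of_s_over_w(BP), is_function_of_s_over_w(LP))  # True True

# a made-up counterexample where the cutoff also enters on its own:
H_bad = w / D
print(is_function_of_s_over_w(H_bad))                            # False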

Post

mystran wrote: Fri Apr 26, 2019 2:34 pm if the transfer function of each of the state variables (e.g. as measured from the capacitor voltages) can be written as a rational function of (s/w) without any other free instances of either s or w (where the latter is the variable cutoff), then the structure is unconditionally stable under cutoff modulation (for strictly positive finite w).
Well, the transfer function in question probably can be written as a function of s/w (although I didn't verify it), which means that it can be implemented with cutoff gains preceding the integrators; however, it isn't, and I suspect this is the reason for the instability. Actually, 2.16 of the book does make a statement similar to yours (requiring identical cutoff gains to precede the integrators is pretty much the same as requiring H(s) to be expressible as a function of s/w); it would be nice to hear your comments on it.

Also, on a more formal level, I think it's not sufficient to require strictly positive finite w; rather, there should be a strictly positive w0 such that w >= w0 (I refer to this as being "uniformly positive", although the term is not established). Otherwise you can have w approaching zero, which may result in infinitely growing signals.

Edit: ah, sorry, I missed that you were referring to the transfer function of each of the state variables. That might make the statement correct, although I'm not sure.

Edit2: I think it doesn't really hold, because you may have the output gains of the state-space system dependent on w, making the output signal arbitrarily large. However it might suffice to include the output transfer function into the requirements, dunno.

Edit3: now that I think of it again, maybe the uniform positiveness is not really needed. However, the mapping proof becomes more complicated in that case. I thought I had a counterexample where the filter could have been made to explode if uniform positiveness is not guaranteed, but I can't remember it anymore. Maybe I made a mistake there.

Post

Z1202 wrote: Fri Apr 26, 2019 3:24 pm Edit2: I think it doesn't really hold, because you may have the output gains of the state-space system dependent on w, making the output signal arbitrarily large. However it might suffice to include the output transfer function into the requirements, dunno.
That's the thing though. As long as the state variables don't depend on w (other than by integration constant, i.e. in the H(s/w) sense), it doesn't matter if the output has a dependency on w, because that won't be enough to introduce additional energy into the feedback system by modulating the cutoff; rather, it's just equivalent to putting a separate gain multiplier after the filter proper.

Post

mystran wrote: Fri Apr 26, 2019 2:34 pm Urgh... someone really needs to figure out an exact mathematical definition for this, because it's really getting silly

Here's what I propose (without proof, but I'm fairly confident these are the sufficient and necessary conditions): if the transfer function of each of the state variables (e.g. as measured from the capacitor voltages) can be written as a rational function of (s/w) without any other free instances of either s or w (where the latter is the variable cutoff), then the structure is unconditionally stable under cutoff modulation (for strictly positive finite w). If such a structure is numerically simulated using a consistent A-stable method (such as trapezoidal), then the simulation is also unconditionally stable under cutoff modulation (again, for strictly positive, finite w).
So basically, if H(s * w) leads to w completely disappearing from the denominator of the transfer function, this can assure stability under modulation. If my understanding is correct, I'm inclined to be convinced that it's sufficient, but I'm not sure it's necessary.

This also means that we don't have to try to manipulate H(s). We can simply check H(s * w) and try to cancel out w, assuming here that w can also be something like cos(...).

Edit: maybe we have to put the denominator in the form 1 + a*s + b*s^2 + ..., or alternatively 1 + a*s^-1 + b*s^-2 + ..., before we do H(s * w).

The thing is, pole-zero filters can have a radical H(s), as in my example: x doesn't cancel out from the denominator when evaluating H(s * x), despite the filter's simplicity.

Again, if my understanding is correct :)
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

mystran wrote: Fri Apr 26, 2019 3:49 pm
Z1202 wrote: Fri Apr 26, 2019 3:24 pm Edit2: I think it doesn't really hold, because you may have the output gains of the state-space system dependent on w, making the output signal arbitrarily large. However it might suffice to include the output transfer function into the requirements, dunno.
That's the thing though. As long as the state variables don't depend on w (other than by integration constant, i.e. in the H(s/w) sense), it doesn't matter if the output has a dependency on w, because that won't be enough to introduce additional energy into the feedback system by modulating the cutoff; rather, it's just equivalent to putting a separate gain multiplier after the filter proper.
The output gains do not have to be proportional to w. They can be anything. Also, even if they are proportional, a 1000-times boost caused by a cutoff change from 10 Hz to 10 kHz might, for all practical purposes, be considered unstable ;)

Actually, I wonder how practical such a criterion would be. In the end, isn't it equivalent to being able to factor w out of every row of the state matrix? (Which is much easier to check.)
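For the SVF at least, that row-factoring check is easy to write down explicitly. A small sympy sketch, assuming the standard analog SVF state equations with states s1 = BP and s2 = LP (the input column w*[1, 0]^T carries the same factor):

import sympy as sp
from functools import reduce

w, R = sp.symbols('w R', positive=True)

# analog SVF state equations:  s1' = w*(x - 2*R*s1 - s2),  s2' = w*s1
A = sp.Matrix([[-2 * R * w, -w],
               [w,           0]])

# greatest common factor of each row: w comes out of every one of them
print([reduce(sp.gcd, A.row(i)) for i in range(A.rows)])  # [w, w]
print(sp.simplify(A / w))                                 # Matrix([[-2*R, -1], [1, 0]])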
