ZDF with Inverse Laplace Transform


Post

mystran wrote:That said, if the feed-forward part goes first, one can use the full BLT with pre-warping and all directly on the integrators, which makes it a rather more fool-proof approach (and you can easily mess with the state too, make the integrator non-linear, whatever).
I'm not sure what you mean by that. Would you care to illustrate?

Edit: oh, do you mean embedding the cutoff into a formal DF/TDF integrator? I'm not sure why one would want to do that, if one could use the approach of Fig.3.11, having an explicit cutoff gain in series with the integrator (which is equally applicable to any integrator form). Can you get more foolproof than that?

Post

Z1202 wrote: Edit: oh, do you mean embedding the cutoff into a formal DF/TDF integrator? I'm not sure why one would want to do that, if one could use the approach of Fig.3.11, having an explicit cutoff gain in series with the integrator (which is equally applicable to any integrator form). Can you get more foolproof than that?
If you treat the "cutoff" as a pre-warping coefficient of the integrators, you can work in terms of normalised analog frequencies, but more importantly, it greatly simplifies the treatment of situations where you have (analog) poles at multiple frequencies but want to use a single pre-warping constant (to properly transform the complete system).

Post

mystran wrote:
Z1202 wrote: Edit: oh, do you mean embedding the cutoff into a formal DF/TDF integrator? I'm not sure why one would want to do that, if one could use the approach of Fig.3.11, having an explicit cutoff gain in series with the integrator (which is equally applicable to any integrator form). Can you get more foolproof than that?
If you treat the "cutoff" as a pre-warping coefficient of the integrators, you can work in terms of normalised analog frequencies, but more importantly, it greatly simplifies the treatment of situations where you have (analog) poles at multiple frequencies but want to use a single pre-warping constant (to properly transform the complete system).
I'm afraid I'm still not quite following you. In Fig.3.11 you have a single coefficient wT/2, which is simply the value of tan(wT/2) after prewarping. If you have multiple cutoffs, then you simply multiply the prewarped value wT/2 by the respective cutoff ratio.

E.g. your prewarping point is w0. Then it prewarps as w0'T/2=tan(w0T/2). For all further cutoffs wi, you set the respective coefficients to wi'T/2 = w0'T/2 * wi/w0. It's 100% intuitive, if you think about it as simply preserving the cutoff ratios.

Edit: the background (on the intuitive level) is as follows. You replace each cutoff gain wi by wi/w0 * w0. Then you have all integrators sharing the same cutoff gain w0, plus there are some additional gain elements in series. You prewarp the w0 gains while (naturally) keeping all other gains in the system (like the feedback gains or the gains wi/w0) untouched.

Edit2: the normalized scale(s) can be achieved by setting T' or T (or both) in w0'T'/2=tan(w0T/2) to unity.
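For illustration, the same recipe in code (a small sketch; the function name and conventions are mine, with the cutoffs given in rad/s):

Code: Select all

#include <cmath>
#include <vector>

// returns the per-integrator coefficients wi'*T/2, prewarped so that the
// discrete-time response is exact at the reference cutoff w0
std::vector<double> prewarpedGains(double w0, const std::vector<double>& cutoffs,
                                   double sampleRate)
{
    const double T  = 1.0 / sampleRate;
    const double g0 = std::tan(w0 * T / 2.0);   // w0'*T/2 = tan(w0*T/2)
    std::vector<double> gains;
    for (double wi : cutoffs)
        gains.push_back(g0 * (wi / w0));        // wi'*T/2 = w0'*T/2 * wi/w0
    return gains;
}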

Post

Z1202 wrote:I'm afraid I'm still not quite following you. In Fig.3.11 you have a single coefficient wT/2 which is simply the value of tan(wT/2) after prewarping. If you're having multiple cutoffs, then you simply multiply the prewarped value wT/2 by the respective cutoff ratio.
I'm sure this is just another one of those "mental model incompatibilities", similar to how I tend to prefer matrix equations over those confusing, hard-to-trace diagrams said book is full of. :)

Either way, I tend to treat the pre-warping (ie. 2*tan(wd/2)/wa for T=1) as a matter of setting a suitable integration constant as a parameter of the numerical integration, rather than a parameter of the actual system. I find this fits well with my mental model (I don't need to worry about it, since I know everything gets shifted around equally when I change that one constant, and where the actual frequencies come from is of no particular concern), whether working with abstract analog prototypes, actual circuits with real-world component values, particle physics simulation... or whatever else.

I find that working this way reduces errors when doing things manually. I also find that it maps very well to automation (since you just transform everything as-is, with the integrators carrying the "discretisation detail" with them). I usually try to let a program figure out the whole thing anyway as soon as a problem has more than 2-4 dimensions (since by then it takes about as long to do it manually as it takes to write a Python or Lua script to automate), and I don't really want to worry about trying to tell a program how to adjust coefficients, so I just let it use a suitable run-time constant on every integrator.

Maybe treating the integration constants as system parameters makes sense when working with box-and-pointer diagrams instead? I don't know. Or maybe it's an orthogonal difference in mental models. In any case, if you pick the pre-warped BLT integrator from Wikipedia and throw it into a DF2 you'll get things wrong. If you use TDF2 you'll almost certainly get it right.

So I would still recommend TDF2 integrators for everyone, simply because it's fairly hard to get them wrong (at least unless you actually try), but I now appreciate that DF2 can also be made to work (even though historically we have seen it done wrong) if one is careful to move the scaling from the output to the input, or if one equivalently uses a mental model where the pre-warping is part of the system rather than the integration (which makes no sense to me, but who knows).

Post

mystran wrote:...or if one equivalently uses a mental model where the pre-warping is part of the system rather than the integration (which makes no sense to me, but who knows).
For me the most intuitive understanding of prewarping is the following. If I simply apply BLT (or BLT TPT) to a system, the system will have a different response, the difference becoming larger closer to Nyquist. If the "most interesting" part of the response (e.g. the resonance peak) is close to Nyquist, this peak will be quite off the original (analog) position. However, for some other cutoff setting the discrete-time response peak will be exactly at the analog position. The prewarping formula simply gives you that cutoff.

As for equations vs block diagrams, each way has its benefits. For me, block diagrams often reveal things not easily visible in the matrix equation form. E.g. the feedback path of the Moog ladder filter implies a certain regularity in the system, so that one needs to solve only 1st-order linear equations (once for the 1-poles, once for the global feedback), rather than inverting a 4x4 matrix. It's also usually easy to find the applicability range of trapezoidal integration (the boundary manifests itself in the graphical form as "instantaneously unstable feedback"). The TDF2 trapezoidal integration scheme is obvious in the graphical form, but not so obvious in the equation form, etc.
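To make the "solve only 1st-order equations" idea concrete, here is a minimal linear sketch (code and names are purely illustrative, not from the book, assuming the usual TPT one-poles y = (g*x + s)/(1+g) with state update s = 2*y - s, and a global feedback gain k):

Code: Select all

// 4-pole ladder, linear case: one scalar division solves the global
// zero-delay feedback instead of inverting a 4x4 matrix
float ladderTick(float x, float s[4], float g, float k)
{
    const float G = g / (1 + g);   // per-stage instantaneous gain
    // state contribution of the whole cascade, as seen at its output
    const float S = (((s[0] * G + s[1]) * G + s[2]) * G + s[3]) / (1 + g);
    // global feedback: u = x - k*(G^4*u + S)
    const float u = (x - k * S) / (1 + k * G * G * G * G);
    // then run the four one-poles in sequence and update their states
    float v = u;
    for (int i = 0; i < 4; ++i) {
        const float y = (g * v + s[i]) / (1 + g);
        s[i] = 2 * y - s[i];
        v = y;
    }
    return v;
}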

YMMV :wink:

Post

Clash of the titans in this thread :)

Based on the responses, I'll try to confirm if the following is right:

1. If the filter is designed in the S-domain, it will eventually need to be transferred into Z-domain anyway? (BLT was mentioned) If this is the case, I might equally well place the poles/zeros in Z-domain to begin with.

2. Quickly browsing Vadim's book based on his comments, am I right in assuming that the following two trapezoidal integrators can actually be used equally well for ZDF purposes, with equivalent (or even identical) results? The integrators in question can be seen in figures 3.9 and 3.12. There seems to be some disagreement about it in this thread.

3. Regardless of how I come up with the transfer function, in the end I always need to choose one topology / filter structure which I'll use to implement the filter. Just like with traditional non-ZDF filters which you design for Z-plane? So there's no "you just magically integrate the signal into discrete samples".

4. S-plane and Z-plane filter structures are identical, differing only by the integrator/delay block? This is based on the comments that BLT would need to be used for integrators.

5. The actual integrator implementation can be completely ignored when deriving the implicit functions so that the result isn't used in the right hand side of the formula? So basically the resulting function/formula is completely separate from the integrator block implementation? The integrator blocks could have a built in Pacman game in them and it wouldn't change the filter function/formula at all?

6. Any existing filter can be made into ZDF if it uses TDF2 or DF1?

Post

Kraku wrote:1. If the filter is designed in the S-domain, it will eventually need to be transferred into Z-domain anyway? (BLT was mentioned) If this is the case, I might equally well place the poles/zeros in Z-domain to begin with.
Digital filters exist in the Z-domain, so if you want a digital filter you need to get into the Z-domain. However, you can't "equally well" place poles and zeros in the Z-domain. The point of ZDF is using good topologies, and those topologies are typically analog ones. Even if you take their TPT-transformed Z-domain counterparts, figuring out how to directly control the pole/zero positions would be rather involved; it's much easier done in the analog domain. Also, the whole transfer function algebra is (IMHO) much easier and more intuitive in the analog domain, while in the case of the BLT the response difference is always simply the tangent prewarping (also very easy to keep in mind). All in all, it's (IMHO) much easier to think in analog terms all the way, while simply assuming the TPT transformation to the digital model.
2. Quickly browsing Vadim's book based on his comments, am I right in assuming that the following two trapezoidal integrators can actually be used equally well for ZDF purposes, with equivalent (or even identical) results? The integrators in question can be seen in figures 3.9 and 3.12. There seems to be some disagreement about it in this thread.
I believe we resolved the disagreement. 3.8 through 3.11 will do (but keep in mind that figures 3.8 through 3.10 do not contain the cutoff gain, so it'll need to be added). However, all other things being equal, 3.11 is the best option.
3. Regardless of how I come up with the transfer function, in the end I always need to choose one topology / filter structure which I'll use to implement the filter. Just like with traditional non-ZDF filters which you design for Z-plane? So there's no "you just magically integrate the signal into discrete samples".
Exactly. Even if you use an equation system as a representation (instead of a block diagram), the topology is still implicitly contained there. So, in principle, you could integrate the equation system (not just the signal) using the trapezoidal integration technique. But by writing down an equation system you have already specified the topology.
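To illustrate with the simplest possible case (names are mine, purely for illustration): applying the trapezoidal rule directly to the differential equation dx/dt = wc*(u - x) of a one-pole low-pass gives an implicit equation in the new state, which is exactly the zero-delay feedback equation you would otherwise read off the block diagram.

Code: Select all

// trapezoidal rule: x[n] = x[n-1] + g*((u[n] - x[n]) + (u[n-1] - x[n-1]))
// with g = wc*T/2 (or g = tan(wc*T/2) with prewarping); solved for x[n]:
double onePoleTrapezoidal(double u, double uPrev, double& x, double g)
{
    const double xNew = ((1.0 - g) * x + g * (u + uPrev)) / (1.0 + g);
    x = xNew;      // the new state is also the low-pass output
    return xNew;
}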
4. S-plane and Z-plane filter structures are identical, differing only by the integrator/delay block? This is based on the comments that BLT would need to be used for integrators.
Correct (as long as we're talking about the TPT method).
5. The actual integrator implementation can be completely ignored when deriving the implicit functions so that the result isn't used in the right hand side of the formula? So basically the resulting function/formula is completely separate from the integrator block implementation? The integrator blocks could have a built in Pacman game in them and it wouldn't change the filter function/formula at all?
Sorry, I didn't understand that question.
6. Any existing filter can be made into ZDF if it uses TDF2 or DF1?
Also didn't understand this one. Any filter of what kind? Analog or digital?

Post

Here are my thoughts (maybe there are some additional insights), and sorry if there's overlap, as we were apparently writing these at the same time.
Kraku wrote: 1. If the filter is designed in the S-domain, it will eventually need to be transferred into Z-domain anyway? (BLT was mentioned) If this is the case, I might equally well place the poles/zeros in Z-domain to begin with.
Only if you want to implement it as a digital effect. You can always build the analog filter too. ;)

As far as placing poles/zeroes goes, see the comments (eg. mine below) about the analog s-plane being a much easier geometry to work in.
2. Quickly browsing Vadim's book based on his comments, am I right in assuming that the following two trapezoidal integrators can actually be used equally well for ZDF purposes, with equivalent (or even identical) results? The integrators in question can be seen in figures 3.9 and 3.12. There seems to be some disagreement about it in this thread.
I assume you mean 3.11 since 3.12 is a complete one-pole low-pass.

The forms shown in 3.9 and 3.10 are equivalent. However, the substitution of the "embedded cutoff gain" wc*T/2 in place of 1/2 that is done from 3.10 to 3.11 should not be done in the same way (as a simple substitution) for 3.9, otherwise one gets the problems we've been discussing (however, there is no such "unsafe" substitution pictured in the book). For 3.9, one should first move the 1/2 multiplier from the output to the input (ie. same side as 3.10) at which point the substitution should be fine again (or one can add wc*T in front and keep the 1/2 afterwards; the point is you don't want changing coefficients at the output).
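To make the coefficient-placement point concrete, here is a small sketch (my own naming, and not a reproduction of the book's exact figures): both forms below are the same trapezoidal integrator for a fixed g, but they behave differently once g is modulated.

Code: Select all

// gain applied at the input, before the state (the "safe" placement):
// the stored state already contains g, so changing g does not rescale history
float tickInputSide(float x, float& s, float g)
{
    const float v = g * x;
    const float y = v + s;
    s = y + v;          // s += 2*g*x
    return y;
}

// gain applied at the output, after the state (the naive substitution):
// the state accumulates un-scaled input, so changing g rescales the whole
// integrated history on the way out
float tickOutputSide(float x, float& s, float g)
{
    const float v = x + s;
    const float y = g * v;
    s = v + x;          // s += 2*x
    return y;
}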
3. Regardless of how I come up with the transfer function, in the end I always need to choose one topology / filter structure which I'll use to implement the filter. Just like with traditional non-ZDF filters which you design for Z-plane? So there's no "you just magically integrate the signal into discrete samples".
There is no verified scientific evidence of any magic. :)

Pretty much the only alternative to choosing an implementation structure is to sample the impulse response (from the transfer function) directly until it decays into the noise floor and then use the resulting IR for FIR filtering like any other... but generally speaking it's easier to just pick some implementation structure. For 2nd order sections (or filters that you can reasonably decompose into 2nd order sections) it's usually safe to choose the SVF (the ZDF-transformed analog SVF, not the classic digital Chamberlin thing) unless you have some specific reason to use something else.

edit: I'd also like to add that your favourite math package can help you with numerical root-finding to decompose arbitrary filters into 2nd order sections... although I'd recommend working with something more straight-forward first
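For reference, a trapezoidal (ZDF-transformed) SVF tick along those lines might look roughly like this; it's only a sketch in the spirit of Andrew Simper's published formulation, and the names (g, k, ic1eq, ic2eq) are assumptions of mine, not something defined earlier in the thread.

Code: Select all

#include <cmath>

struct TrapSVF
{
    double g = 0, k = 1, ic1eq = 0, ic2eq = 0;   // coefficients and integrator states

    void set(double cutoffHz, double sampleRate, double Q)
    {
        const double pi = 3.14159265358979323846;
        g = std::tan(pi * cutoffHz / sampleRate);   // prewarped cutoff gain
        k = 1.0 / Q;                                // damping
    }

    double tick(double v0)   // returns the low-pass output
    {
        const double v1 = (ic1eq + g * (v0 - ic2eq)) / (1.0 + g * (g + k));  // band-pass
        const double v2 = ic2eq + g * v1;                                    // low-pass
        ic1eq = 2.0 * v1 - ic1eq;   // trapezoidal state updates
        ic2eq = 2.0 * v2 - ic2eq;
        return v2;                  // high-pass would be v0 - k*v1 - v2
    }
};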
4. S-plane and Z-plane filter structures are identical, differing only by the integrator/delay block? This is based on the comments that BLT would need to be used for integrators.
On the continuous-time S-plane (based on integrators 1/s) the filter response is found on the imaginary axis, while on the discrete-time Z-plane (based on unit delays 1/z, or z^-1 as it's usually written) it's found on the unit circle. These differences make the actual coefficients behave in very different ways (with the "linear" S-plane being much more straight-forward than the trigonometric Z-plane), and since the filter structures are based on different constructs (integrators vs. delays) they behave quite differently as well.

That said, when doing TPT style transforms, what we do is approximate the analog integrator with a "complete" digital filter (ie. the BLT integrator) that is only "internally" based on delays, which then allows using the continuous-time (analog) structures.
5. The actual integrator implementation can be completely ignored when deriving the implicit functions so that the result isn't used in the right hand side of the formula? So basically the resulting function/formula is completely separate from the integrator block implementation? The integrator blocks could have a built in Pacman game in them and it wouldn't change the filter function/formula at all?
I'm not sure if I really understand the question, but the idea is that you can build an "analog" design in continuous time (without worrying much about digital) and then "drop in" the "digital wannabe integrator" to arrive at the final digital filter (at least in the abstract sense; you then need to solve the resulting equations so you can implement them).
6. Any existing filter can be made into ZDF if it uses TDF2 or DF1?
Not really. Any existing filter (in the transfer function sense, analog or digital) can be implemented with a ZDF-style filter (at least in principle), but in the implementation structure sense you can build a ZDF filter out of any analog (well, continuous-time) filter structure. An existing digital structure doesn't really help here; you essentially have to start from scratch (although some of them are similar to related analog structures that one can transform).

Post

I'd like to add an opinion (and I think Vadim agreed above) that whenever a continuous-time (ie. analog) design (+BLT) is a reasonable approach to a given (IIR) filter design problem, it is usually several orders of magnitude easier and faster to solve it in continuous time and transfer the results over.

"Direct digital" design is only really worth it if you are trying to solve some problem that is fundamentally digital in nature (eg. trying to counter the frequency warping, or maybe abuse longer delays, etc) and the resulting math is generally horrible in comparison to the very straight-forward s-plane math.

Post

Z1202 wrote: As for equations vs block diagrams, each way has its benefits. For me, block diagrams often reveal things not easily visible in the matrix equation form.
Well, this was mostly a joke. The specific matrix notation that I normally use (unless there is a good reason for something else) is a sort of hybrid between a state-space and an MNA-style connectivity matrix: integration is explicit, several dimensions often represent pure weighted summing nodes to simplify reasoning (since why not, these can be easily optimised out automatically), and inputs/outputs are simply represented as elements of the RHS/LHS vectors.

In short, it's essentially just an adjacency matrix of the signal-flow graph of a traditional block diagram with a few tweaks, I just happen to find the 2D table format a little easier to work with and reason about (plus it's then easy to feed it to a code-generator or dump it into a math package for analysis; doing it all manually is fun the first 5 times and then it just gets very boring).

Post

So let's see if I understand correctly. Here's an attempt at creating a 2 pole ZDF peak filter designed in Z-domain.

"Pole only" transfer function in Z-domain. For simplicity I've used Y and X instead of Y(z) and X(z).
Y / X = 1 / ((z - p)(z - c))

"p" and "c" are a complex conjugate pair. "r" is real part and "m" is the imaginary part:
p = r + m i
c = r - m i

When we substitute and expand everything, we'll get:
Y m^2 + Y r^2 - Y 2 r z + Y z^2 = X

Shuffling around the formula, we'll get:
Y z^2 = X - Y m^2 - Y r^2 + Y 2 r z

Let's multiply both sides with z^-2 so largest Y is the current output:
Y = X z^-2 - Y m^2 z^-2 - Y r^2 z^-2 + Y 2 r z^-1

or:

Y = X z^-2 - Y (m^2 + r^2) z^-2 + Y 2 r z^-1

or:

Y = X z^-2 + a2 Y z^-2 + a1 Y z^-1
a1 = 2 r
a2 = -(m^2 + r^2)

From here we can see how to implement this with a TDF2 filter structure:

[Image: TDF2 structure implementing the transfer function above]

Am I right that you just replace the delay blocks with trapezoidal integrators? If this is the case, then I assume the following holds true:

y = x + s1
s1 = a1*y + s2 ----------> (input for integrator s1)
s2 = a2*y + x ----------> (input for integrator s2)

From here it becomes hazy: how do I handle the integrator functions in the math? They're not just math formulas but actual C++. If I don't know this, I can't solve the implicit equations so that the right-hand side of the formulas doesn't contain Y.

Post

To me it looks like you're doing things upside down. You should start with an s-domain transfer function, express it in graphical form in terms of integrators (thus choosing a particular topology), and then replace the integrators with Fig.3.11 (not replace delays with integrators).

Post

Trapezoidal integrators are used to replace continuous time integrators, not discrete unit delays.

The TDF2 trapezoidal integrator is:

Code: Select all

//calculate coefficient
g = tan(pi*frequency/sampleRate)
//initialise state
s = 0
//process loop
y[n] = x[n]*g + s   // calculates the filter's output
s = x[n]*g + y[n]   // updates the state
This is derived from taking the BLT of the analogue integrator transfer function: 1/s (The Art of VA Filter Design has a lot more info on this).

For a peak filter the analogue transfer function is:

Code: Select all

 H(s) = (s^2 + s*(A/Q) + 1) / (s^2 + s/(A*Q) + 1)
//A = 10^(dBgain/40)
//Q can be just q or modified for bandwidth
We can get the coefficients to plug into the diagram I posted above simply by looking at the analogue transfer function. Denominator for the 'b's, numerator for the 'a's. In descending powers of s.

Code: Select all

a0 = 1 
a1 = (A/Q)
a2 = 1

b0 = 1
b1 = 1/(A*Q)
b2 = 1
I've added some points of reference to the circuit: y0, y1 and y2

[Image: the same circuit with the reference points y0, y1 and y2 marked]

With the coefficients and integrator code we can make equations for y0, y1 and y2:

Code: Select all

y0 = input - (y1*b1 + y2)
y1 = y0*g+s1
y2 = y1*g+s2
Combine the three so we have one equation with only one output term

Code: Select all

y0 = input - ((y0*g+s1)*b1 + ((y0*g+s1)*g + s2))
//s1 and s2 are the two integrator states
Now you just need to rearrange that so y0 is only on the left-hand side. Once you've got that, just calculate y1 and y2 and update the integrator states (and apply the final gain coefficients and output summation, obviously).

I did this quickly, so there could well be errors :)
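In case it helps, one possible rearrangement might look like the sketch below (using the names above with b2 = 1; the final output sum over the 'a' coefficients is my addition, so double-check it against your own derivation):

Code: Select all

// one sample of the peak filter core, given the integrator coefficient g,
// the denominator coefficient b1, the numerator coefficients a0..a2 and
// the two integrator states s1, s2
double peakTick(double input, double& s1, double& s2,
                double g, double b1, double a0, double a1, double a2)
{
    const double y0 = (input - s1 * (b1 + g) - s2) / (1 + g * (b1 + g));  // zero-delay solve
    const double y1 = y0 * g + s1;
    const double y2 = y1 * g + s2;
    s1 = y0 * g + y1;   // update integrator states
    s2 = y1 * g + y2;
    return a0 * y0 + a1 * y1 + a2 * y2;   // output summation
}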

Post

Kraku wrote:So let's see if I understand correctly. Here's an attempt at creating a 2 pole ZDF peak filter designed in Z-domain.
I would actually highly suggest that rather than trying to struggle through a forum discussion, you pick up Vadim's book (The Art of VA Filter Design; it's free, it's good, it doesn't assume anything too fancy, and all the stuff discussed is known to work directly in practice) and go through at least chapters 1, 2, 3 and 5 (you can skip 4 if you don't care about non-linearities and Moog-style ladders). If you already have a basic understanding of traditional filter design, this shouldn't take more than a couple of hours (there are a lot of nice pictures, so it's not quite as long as the page count might suggest). At that point you'll be able to design basic ZDF filters, and it will be much easier for other people to help you.

edit: however, actually read the text and it will all make a lot of sense :)

edit2: oh and while the book is called "of VA filter design" the techniques discussed are arguably very useful for any kind of music-related DSP processing, not just "virtual analogs" as such.

Post

Matt42's explanation looks mostly easy to follow, but there are one or two points which are still a bit too alien for me to grasp properly.

I think I'll do what Mystran suggests and come back here to report how I'm doing. Then I can read this thread again from start to finish and see if there's still some new info here :)

Thank you everyone!
