When to choose a ZDF

DSP, Plugin and Host development discussion.

Post

Hey all,

I've recently been learning about VA filter modeling and zero delay filter implementations. There are a ton of awesome resources and forum posts on the theory and the maths, but I haven't seen a concise answer to the question "When would it make sense to choose a ZDF?" So I want to outline the considerations that I've gathered from these various places and get your feedback on my attempt to answer this question.

It seems to me that these are the considerations when choosing between a ZDF implementation and a more classic implicit method (like backward Euler or trapezoidal integration):
  • With a unit delay in your feedback path (implicit), a filter is more prone to instability with the pole/zero coefficients.
  • Implicit methods yield distortion in the higher frequencies (as you approach Nyquist), which can be attenuated with oversampling, but which can be more or less avoided with a ZDF.
  • (Maybe similar to the previous...) A ZDF gives you better accuracy with the cutoff frequency and resonance (for 2+ order filters), especially near Nyquist.
  • A ZDF yields a smoother time-varying filter response than an implicit filter method with a unit-delay.
So, first things first, are these bullets accurate? Please correct me if I'm wrong here. Then, does this effectively enumerate the considerations when choosing which type of filter to use in your project? From what I've read, a ZDF alone doesn't necessarily "sound more analog," but it is definitely a more accurate analog simulation.

Thank you!

Post

There are some problems here, possibly caused by a terminological confusion; I'm not sure exactly which:

With a unit delay in your feedback path (implicit), a filter is more prone to instability with the pole/zero coefficients.

Implicit methods yield distortion in the higher frequencies (as you approach nyquist), which can be attenuated with oversampling, but which can be more or less avoided with a ZDF.

(Maybe similar to the previous...) A ZDF gives you better accuracy with the cutoff frequency and resonance (for 2+ order filters), especially near Nyquist.

A ZDF yields a smoother time-varying filter response than an implicit filter method with a unit-delay.
The words "implicit/explicit" in that literature are used in the sense mentioned here https://en.wikipedia.org/wiki/Explicit_ ... it_methods and "ZDF" basically means finding a solution to an implicit equation. It may further be used in a sense to refer to certain other design criteria imposed upon the solution( https://www.native-instruments.com/file ... pology.pdf ) , implicitly. (It's even further implicit in that in practice it often refers to simulation of a moog synth filter without explicitly mentioning it each time.) These are some of the different senses it may be "implicit" about.

A further distinction can be made between analytical and numerical/iterative solutions as mentioned here https://math.stackexchange.com/question ... o-problems

A "closed form" solution refers to the same concept as "analytical solution" (https://en.wikipedia.org/wiki/Closed-form_expression )

Basically that's the terminology and there may even be further names that refer to the same concepts. Application of each method yields a solution with different properties.

That literature obviously has much more content than just these distinctions, but without the proper interpretation of the terminology it sounds more mystical than it actually is.

Returning to the paper(s) that may have confused you, read these sentences from the paper linked above again:
A trivial approach is commonly in use here, to simply put additional z^-1 delays into the offending feedback paths. This often results in acceptable transfer functions, except for the high frequency areas and/or certain values of system parameters, where the transfer function gets distorted beyond reasonable, often leading to unstable systems.
As you can see these sentences refer to an explicit solution, not "a traditional implicit one".
~stratum~

Post

Basically three of your four points above are sort of misunderstandings. So here are just a few quick and very raw remarks (I'll be corrected where they are not quite right):

With a unit delay in your feedback path (implicit), a filter is more prone to instability with the pole/zero coefficients.

Nope. [1] In simple words (as stratum already pointed out above) the numerical stability is mostly just a matter of replacing `y[n] = k*y[n-1];` feedbacks with `y[n] = y[n-1] - k'*y[n-1]; k' = 1 - k;` (same for higher-order feedbacks where possible). The technique is not unique to any specific topology and the unit delay is still there.
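
To make [1] concrete, here's a minimal one-pole sketch (my own illustrative C++, untested, not taken from any particular paper) showing two algebraically identical forms of the same smoother:

Code: Select all

// Naive form: y[n] = k*y[n-1] + (1 - k)*x[n]
float onePoleNaive(float x, float& y, float k)
{
    y = k * y + (1.0f - k) * x;
    return y;
}

// "Safe" form: store k' = 1 - k and update the state by a small difference,
// y[n] = y[n-1] + k'*(x[n] - y[n-1]).  The unit delay is still there, but a
// tiny k' is represented far more precisely in floating point than a k
// sitting just below 1.
float onePoleSafe(float x, float& y, float kp)
{
    y = y + kp * (x - y);
    return y;
}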

Implicit methods yield distortion in the higher frequencies (as you approach nyquist), which can be attenuated with oversampling, but which can be more or less avoided with a ZDF.
Nope. You do not get any "new" or "better" magnitude curves. It's still the same filter, it's just a different topology (i.e. signal flow).

A ZDF gives you better accuracy with the cutoff frequency and resonance (for 2+ order filters), especially near Nyquist.

Not quite. You do get better accuracy for numerically-unsafe settings (i.e. low-frequency and/or high-resonance) but it's mostly a matter of [1] above (and has almost nothing to do with "Nyquist").

A ZDF yields a smoother time-varying filter response than an implicit filter method with a unit-delay.

Yes. Better time-varying behaviour (but it's not because of implicit/explicit bla-bla-bla, this is the result of certain topology constraints - like the [1]-thing above, then keeping feedback coeffs within the same scatter-junction and so on).
The TPT technique does tend to end up with a more or less optimal solution in this regard, but this is not unique to the technique, nor does it guarantee the best time-varying stability right out of the box - as far as I understand it actually depends on the topology of the continuous filter you transform. So when it comes to these things it makes sense to point to a specific filter structure - e.g. Andrew's "SVF", some structure in Vadim's VA book, etc. Just "ZDF" on its own means really nothing.

---
edit: fixed wrong [2] markers (it's correct [1] now).
Last edited by Max M. on Sat Nov 25, 2017 2:40 am, edited 1 time in total.

Post

Max M. wrote:With a unit delay in your feedback path (implicit), a filter is more prone to instability with the pole/zero coefficients.

Nope. In simple words (as stratum already pointed above) the numerical stability is mostly just a matter of replacing `y[n] = k*y[n-1];` feedbacks with `y[n] = y[n-1] - k'*y[n-1]; k' = 1 - k;` (same for higher-order feedbacks where possible). The technique is not unique to any specific topology and the unit delay is still there.
If we are modeling a topology with instantaneous feedback loops, adding a unit delay will alter the pole locations and may well result in instability, as we may push the poles outside the unit circle. This has nothing to do with numeric stability, though.
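
A toy example (my numbers, purely for illustration): take a feedback loop that is just a gain, with no integrator at all.

Code: Select all

instantaneous:    y = g*(x - y)            =>  y = x * g/(1 + g)        (finite for any g >= 0)
with unit delay:  y[n] = g*(x[n] - y[n-1]) =>  H(z) = g / (1 + g*z^-1),  pole at z = -g
                                               => unstable as soon as g > 1

So at high loop gains (think high resonance) the delayed version leaves the unit circle even though the original loop was perfectly well behaved.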
Max M. wrote:Implicit methods yield distortion in the higher frequencies (as you approach nyquist), which can be attenuated with oversampling, but which can be more or less avoided with a ZDF.
Nope. [2] You do not get any "new" or "better" magnitude curves. It's still the same filter, it's just the different topology (i.e. signal flow).
Sure, if we use the BLT to derive a filter, the frequency response is 100% the same as that of a ZDF filter (assuming bilinear, aka trapezoidal, integrators). However, if we are modeling a filter topology then, as mentioned above, adding unit delays to feedback loops changes the pole locations and alters the frequency response.
A ZDF yields a smoother time-varying filter response than an implicit filter method with a unit-delay.
At its core, most of the ZDF filters you find (based on Vadim's or similar methods) are built on transposed direct form II (TDF-II) integrators. These are very simple structures, and as such the filter states don't get mangled as much under modulation as they might in some other filter structures. It has also been reasoned that since the topology matches the analog counterpart, the structure should model the modulated behaviour, which was presumably decent in the original circuit. (Although, as pointed out, you seem to be mixing up implicit/explicit.)
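
For reference, the kind of building block I mean looks roughly like this (a quick, untested C++ sketch of a trapezoidal one-pole lowpass in the TPT style; the names are mine):

Code: Select all

// Rough sketch of a trapezoidal ("TPT") one-pole lowpass - untested, illustrative only.
// g is the prewarped integrator gain, e.g. g = tan(pi * cutoff / samplerate).
struct OnePoleTPT
{
    float s = 0.0f;                 // integrator state

    float process(float x, float g)
    {
        float G = g / (1.0f + g);   // instantaneous gain once the feedback is resolved
        float v = G * (x - s);      // what gets integrated this sample
        float y = v + s;            // filter output
        s = y + v;                  // trapezoidal (TDF-II) state update
        return y;
    }
};

Note how little state there is - one value per integrator, updated from the already-computed output - which is part of why modulation doesn't mangle it much.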

Post

matt42 wrote:With a unit delay in your feedback path (implicit), a filter is more prone to instability with the pole/zero coefficients.
...
If we are modeling a topology with instantaneous feedback loops adding a unit delay will alter the pole locations and may well result in instability as we may push the poles out of the unit circle. This has nothing to do with numeric stability though.
You're right. I guess I simply misunderstood that paragraph. (I just did not expect that a broken filter, where the instantaneous feedback is replaced with a unit delay, is what would be treated as an "implicit" method, or that the stability of this broken filter would ever be considered at all... Though they (we) often did this for some filters (phasers? Moog ladder?) earlier...)

Post

Max M. wrote:I guess I simply misunderstood that paragraph.
Yeah, I think it's confusing because he uses the term implicit for the unit delay case. I'm assuming it's just the wrong term, but maybe he means something else and I'm not catching on.

Post

Yeah, I think it's confusing because he uses the term implicit for the unit delay case. I'm assuming it's just the wrong term, but maybe he means something else and I'm not catching on.
That's what I was saying: that case is the explicit solution, because introducing z^-1 into the system block diagram means the previous value of the system output is being used during the computation of the new value of the system output. Therefore the usage of the word "implicit" in that sentence is wrong.

What makes the matter more confusing is that even the implicit equation contains a variable carrying the previous system state, even if it's not the system output; that variable is named "s" in the paper above - the state of the integrator block.

Code: Select all

y = f(x - ky) = g(x - ky) + s
(eq. 1 in Vadim's "keeptopology.pdf" paper)

While this equation is being solved for y, the fact that the value s from the previous system state is being used is ignored, and the problem is not mentioned. What to do about it is explained here http://urs.silvrback.com/zero-delay-feedback but not in the paper.
The fun bit about the s (state, sC, sum) thing is, we can decouple the calculation of the filter output from the actual integration step, i.e. summing the virtual currents into the virtual capacitors. In fact we can use the very same filter algorithm regardless of choice of integration method, specifically for single step integration methods such as Backward Euler and bilinear/trapezoidal integration. For the Euler ones we would simply set s[n+1] to y[n] after each sample step, which produces that legendary unit delay for the integration step - but not for the computation of the filter output!
i.e. hocus pocus and a miracle occurs here; I guess in timeless engineering wisdom one has to first know what to ignore.
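
For the linear case the "miracle" is just algebra: take eq. 1 with f being a plain gain g (my notation) and solve for y before the state is touched:

Code: Select all

y = g*(x - k*y) + s
y*(1 + g*k) = g*x + s
y = (g*x + s) / (1 + g*k)       <- no unit delay needed anywhere in the feedback path

Only after y is known do the integrator states get updated - e.g. s[n+1] = y[n] for the backward Euler case mentioned in the quote - which is exactly the decoupling Urs describes.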
~stratum~

Post

@stratum @matt42 @Max M. Thank you guys, that was very helpful. Indeed, I had the terminology confused.

I’ve gone back through the papers with a better understanding of the terminology and I feel like I’ve cleared a lot of my confusion up.

So, let me try to take a pass at my original questions with an updated understanding. First, Matt’s point here seems to explain each of the first three bullet points I was trying to ask about, where the comparison I was trying to make was between an implicit/ZDF approach vs. an explicit unit-delay approach for the same filter topology:
If we are modeling a topology with instantaneous feedback loops adding a unit delay will alter the pole locations and may well result in instability as we may push the poles out of the unit circle.
Going back and reading through these resources again, I’ve found additional confirmation of this point as one of the “unpleasant side effects” of using a unit-delay in the feedback path (i.e. in Urs’ article here https://urs.silvrback.com/zero-delay-feedback).

As for my last bullet, I actually think Matt and Max have given a proper answer to the question I was originally trying to ask, which is just on the time-varying behavior of an implicit model vs. that of an explicit unit-delay model.

So at this point, I think my high level question basically boils down to this: As you’re designing a system, and come upon the opportunity to implement a particular filter either with an implicit equation or with an explicit unit delay, what factors do you weigh in making your decision? Would you make the decision based only on the stability requirements of your filter and the desired accuracy of the pole locations? Or perhaps I have this backwards to begin with; perhaps implicit solutions tend to arise only when trying to model an analog circuit with an instantaneous feedback loop to yield a more accurate simulation?

Thanks again.

Post

My guess is that inserting an explicit unit delay while modeling an analog circuit was carelessness caused by confusion, which in turn was caused by the overall complexity of what was being attempted - I say that because it simply seems foolish to do so. DSP takes a lot of time to learn, and meanwhile you do many things without actually knowing why, simply because that's the way the literature is. It turns out experts sometimes get confused too, because when they were students nobody actually taught them anything, just wrote formulas on the board, and years later it became a habit and established practice.

After realizing that the unit delay does not need to be inserted, and how foolish that was, the rational next step would be to go back to the drawing board, start from scratch, and do your own analysis instead of following the traditional derivation of recursive filters and the related literature, because the foundation of that literature has entirely different concerns.

What all this means is that when you want to simulate analog circuits and filters, the relevant literature is numerical analysis texts, basic circuit theory, and papers/books about ZDF - surprisingly, not DSP texts. I guess this in turn answers your question about "when to use them": the reason is that these systems are not linear and time-invariant.
~stratum~

Post

Hmm, the (linear) SVF algorithm proposed by Chamberlin yields a filter which becomes unstable above Nyquist/3. That's because it uses explicit Euler for integration. Moving the same algorithm to trapezoidal integration yields a filter which is stable up to Nyquist.
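
For those who haven't seen it, the Chamberlin update looks roughly like this (written from memory and untested, so double-check against the original text - it's only meant to show where the explicit Euler integrators sit):

Code: Select all

// Chamberlin state variable filter, per-sample update, explicit (forward) Euler integrators.
// f = 2*sin(pi*cutoff/samplerate), q = 1/Q (damping). From memory, untested.
struct ChamberlinSVF
{
    float low = 0.0f, band = 0.0f;

    float process(float x, float f, float q)
    {
        low  += f * band;                  // integrator 1 (still uses last sample's band)
        float high = x - low - q * band;
        band += f * high;                  // integrator 2
        return low;                        // low = LP, high = HP, band = BP, low + high = notch
    }
};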

In my opinion, for linear filters there's really no reason to go explicit ever. Unless of course you want a specific (static?) filter which you just lift off the internet or some text book.

As for non-linear filters, I guess it always depends on what you can get away with in your application.

Post

Moving the same algorithm to trapezoidal integration yields a filter which is stable up to Nyquist.
And this is so simply because it makes far fewer errors in numerical integration.

Which is just another indication that the people who were working on this were reading the wrong books. It's interesting to see that the solution only took the correction of two trivial errors (it's difficult to see this in Vadim's books because he wouldn't state it this way, possibly because of the scholarly style an article/book requires, and something very important is lost because of that - thanks for clarifying it with a blog :) ).

The interesting part begins after the solution - i.e. the analysis of the resulting solution, because the literature apparently had no answer to that.
Last edited by stratum on Mon Nov 27, 2017 11:18 am, edited 1 time in total.
~stratum~

Post

stratum wrote:
Moving the same algorithm to trapezoidal integration yields a filter which is stable up to Nyquist.
And this is so simply because it makes far fewer errors in numerical integration.

Which is just another indication of the fact that the people who were working on this were just reading the wrong books. It's interesting to see that the solution only took correction of two trivial errors. The interesting part begins after that - i.e. the analysis of the resulting solution, because the literature apparently had no answer to that.
Off the top of my head, the general amount of error made by backward or forward Euler (corresponding to the rectangular method of integration) is the same as for trapezoidal integration. For a single slice trapezoidal might produce better precision, but once you integrate over multiple slices, if I'm not mistaken, the additional errors cancel and the results differ only at the ends of the range (thus the relative error vanishes as the range grows indefinitely).

The actual reason that trapezoidal integration is stable is not directly related to the precision of the integration, but to the transformation of the s-plane that it implies.
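
To spell that mapping out (this is just the standard bilinear-transform property, nothing specific to any of the papers here):

Code: Select all

trapezoidal integrator:  1/s = (T/2) * (z + 1)/(z - 1)
=>                       s   = (2/T) * (z - 1)/(z + 1)

Re(s) < 0  <=>  |z| < 1,     Re(s) = 0  <=>  |z| = 1

So the whole left half of the s-plane lands inside the unit circle, and stability is preserved exactly, independently of how accurate the integration itself is.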

Never thought about this in depth, but it also seems to me that the precision of integration has little if anything to do with preserving stability. It would be interesting to check whether higher-order integration methods (which give precision higher than trapezoidal) result in stable filters. I think most of those methods can't be implemented in practice due to the non-causality of the integration (an infinitely large lookahead would be necessary), but that shouldn't prevent a theoretical analysis of the stability of the resulting filters. I vaguely remember looking at this quite a few years back and I think this was exactly my conclusion, but I don't remember what exactly I analyzed, and I could have made some mistakes.

Post

Are you sure? The fact that errors can cancel other errors isn't the same thing as being accurate about the very moment the equations describe. I.e. let's assume the Euler method makes +x amount of error during this particular summation slice and about -x in some other, whereas the trapezoidal rule makes much smaller errors each time. As far as the eventual result is concerned, both may look as if they have the same amount of precision, whereas the trapezoidal rule tracks what happens at each moment more accurately. I.e. using an integrator with less instantaneous precision looks as if we are injecting noise near Nyquist. Random errors that do not add up in the same direction may not cause instability - that seems like a reasonable conclusion - and yet noise isn't particularly desirable, especially when it may be amplified by a feedback mechanism. What happens when the feedback gain is large enough and the noise is large enough that the result happens to be positive feedback at that moment, whereas it wouldn't be in the analog version, because there the noise would be purely random while the noise injected by the integrator is not independent of the system state? Don't know, I'm just speculating, obviously. Perhaps this could be analysed in terms of 'the transformation of the s-plane'; that's the part I'm not capable of reasoning about.
~stratum~

Post

stratum wrote:Are you sure? The fact that errors can cancel other errors isn't the same thing as being accurate about the very moment the equations describe. I.e. let's assume the Euler method makes +x amount of error during this particular summation slice and about -x in some other, whereas the trapezoidal rule makes much smaller errors each time. As far as the eventual result is concerned, both may look as if they have the same amount of precision, whereas the trapezoidal rule tracks what happens at each moment more accurately. I.e. using an integrator with less instantaneous precision looks as if we are using a lower sampling rate.
Your reasoning does have a point, but I still don't think there is a direct relationship between the precision of integration and stability. After all, trapezoidal integration is not the most precise one, yet it exactly preserves stability.

Consider e.g. Simpson's rule instead of trapezoidal:

Code: Select all

y[n] = y[n-1] + ( x[n] + 4x[n-1] + x[n-2] )/6
y*(1-z^-1) = x * ( 1 + 4z^-1 + z^-2 ) /6
y/x = 1/6 * ( 1 + 4z^-1 + z^-2 ) / (1-z^-1)
1/s = 1/6 * ( 1 + 4z^-1 + z^-2 ) / (1-z^-1) = 1/6 * (z^2 + 4z + 1) / (z-1)
s = 6 * (z-1) / (z^2 + 4z + 1)
As one can see, s=-1 is mapped (approximately) to z=-10.5 and z=0.5. This means the resulting structure (unless I made some mistakes, as this was done very quickly without much thinking ;) ) is not stable.

Post

One can argue that Simpson's formula doesn't have a proper latency. Well, trying out Simpson's 3/8 rule with latency correction, one has

Code: Select all

y[n] = y[n-1] + ( x[n+1] + 3x[n] + 3x[n-1] + x[n-2] )/8
s = 8 * (z-1) / (z^2 + 3z + 3 + z^-1)
where s=j is mapped (among others) to z=-3.6-8.6j
