Book: The Art of VA Filter Design 2.1.2

DSP, Plugin and Host development discussion.

Post

Sorry for being imprecise, Vadim...
I'm quite lazy with ASCII math notation :oops:

The differential matrix represents the one-pole filter as an autonomous cell and describes how it behaves during a time step t.

Code: Select all

       x    y 
dx/dt  0    0
dy/dt  a   -a
The input x does not move, so dx/dt = 0 * x + 0 * y (hence the name "step invariant" transform).
The output y varies depending on the difference between x and y, so dy/dt = a * (x - y).

The exponentiation corresponds exactly to what happens during a time step t.
The differential "analog" matrix is M.
The transition "digital" matrix is exp(M * t) (the matrix exponentiation is the discretization step).

Code: Select all

exp(M*t) transition matrix for the low pass cell

      xn-1          yn-1
xn    1             0           input does not change: xn = xn-1 (step invariant)
yn    1 - e^(-at)   e^(-at)     output varies according to exp(-at) like a nice RC circuit
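
(For reference, a minimal numerical check, not from the book, that exp(M*t) really produces the transition matrix above; it assumes NumPy/SciPy, and the coefficient and time step values are arbitrary.)

Code: Select all

import numpy as np
from scipy.linalg import expm

a = 2.0 * np.pi * 1000.0   # example coefficient in rad/s (arbitrary)
t = 1.0 / 48000.0          # one time step at 48 kHz (arbitrary)

# state vector [x, y]: dx/dt = 0, dy/dt = a*(x - y)
M = np.array([[0.0, 0.0],
              [a,  -a ]])

T = expm(M * t)            # transition matrix over one time step

expected = np.array([[1.0,                 0.0],
                     [1.0 - np.exp(-a*t),  np.exp(-a*t)]])

print(np.allclose(T, expected))   # -> True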


I have used the step invariant transform for low pass cells and for the state variable filter.
This method does not preserve topology, and it gets too complicated for filters with long feedback loops (such as the Moog filter).
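For completeness, here is a rough sketch of the general step-invariant / zero-order-hold recipe (my own notation, SciPy assumed); the SVF state equations below are the standard analog ones, not necessarily exactly the formulation I used:

Code: Select all

import numpy as np
from scipy.signal import cont2discrete

fs = 48000.0                 # sample rate (arbitrary)
wc = 2.0 * np.pi * 1000.0    # cutoff in rad/s (arbitrary)
Q  = 0.707                   # quality factor (arbitrary)

# states [bp, lp]:  bp' = wc*(x - lp - bp/Q),  lp' = wc*bp
A = np.array([[-wc/Q, -wc],
              [ wc,    0.0]])
B = np.array([[wc],
              [0.0]])
C = np.array([[0.0, 1.0]])   # pick the lowpass output
D = np.array([[0.0]])

# zero-order hold = input held constant over each step = "step invariant"
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), 1.0/fs, method='zoh')
# per-sample update: state[n+1] = Ad @ state[n] + (Bd * u[n]).ravel()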

I've also experimented with the "ramp" invariant transform (which treats the input as varying linearly during the sample step). This did not have a noticeable effect on my low pass filters, but it slightly improved the high pass filters.

By the way, something I really like in your book is that you show the various integrators as real-life stuff, with block diagrams. Most DSP books and tutorials present the different options as somewhat abstract Laplace-to-Z transforms. You give body to these transforms!

Did you experiment with designs combining different kinds of integrators?
See you here and there... Youtube, Google Play, SoundCloud...

Post

Smashed Transistors wrote:Sorry for being imprecise, Vadim...
I'm quite lazy with ASCII math notation :oops:

The differential matrix represents the one-pole filter as an autonomous cell and describes how it behaves during a time step t.

Code: Select all

       x    y 
dx/dt  0    0
dy/dt  a   -a
.....

Did you experiment with designs combining different kinds of integrators?
What I was referring to is that I believe you don't have to include x in the state vector; it's not part of the system state. Your matrix also has rank 1, so you don't even need to use matrix exponentials here.

As for combining different integrators, I didn't find this very interesting. The BLT integrator is "the best" anyway. Also, by combining different integrators you lose the ability to represent the discretization as an s- to z-plane mapping.

Post

You're right, I just put the constant input in the matrix as a reminder that we consider it constant during the sampling step... which causes severe phase distortion in the top two octaves and problems with the feedback.
In fact, the step invariant transform should only be used on the whole system, not locally => go for matrix exponentiation.

On the contrary, the bilinear transform can be used locally in order to preserve the topology of the system, which is necessary in order to add non-linearities in the system.

The bilinear transform preserves stability. Its slight side effect is the frequency warping that affects the top octave.

You are right, mixing integrator elements would make the s-to-z mapping difficult and stability issues rather hard to predict. The Chamberlin State Variable Filter is a good example: it combines a backward and a forward Euler integrator, which optimises resonance but can become unstable (see https://ccrma.stanford.edu/~dattorro/Ef ... nPart1.pdf and https://ccrma.stanford.edu/~jos/svf/svf.pdf).
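For readers following along, the Chamberlin update is only a few lines; here is a minimal sketch of it (my own variable names, see the Dattorro paper above for the original presentation):

Code: Select all

import math

def chamberlin_svf(x, fc, fs, q=1.0):
    # q = 1/Q (damping); f is the frequency coefficient
    f = 2.0 * math.sin(math.pi * fc / fs)
    low = band = 0.0
    out = []
    for xn in x:
        low  = low + f * band        # integrates the band value from the previous sample
        high = xn - low - q * band   # highpass = input minus the states
        band = band + f * high       # integrates the freshly computed high value
        out.append((low, band, high))
    return out

The two integrators look at their inputs from different sides of the sample (one uses the previous band value, the other the just-computed high), which is the forward/backward Euler mix mentioned above.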
See you here and there... Youtube, Google Play, SoundCloud...

Post

I think the frequency axis warping with the BLT should be mostly negligible at 88kHz SR or above (from the audibility point of view). Since in VA you usually need high SR because of nonlinearities anyway, that seems good enough for me. It also perfectly preserves the amplitude-phase relationship, which is often very important.
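To put rough numbers on it: under the plain (non-prewarped) BLT an analog frequency f lands at (fs/pi)*atan(pi*f/fs) in the digital domain. A tiny sketch (my own, the test frequency is arbitrary):

Code: Select all

import numpy as np

def blt_warped_freq(f_analog, fs):
    # digital frequency that an analog frequency lands on under the plain BLT
    return (fs / np.pi) * np.arctan(np.pi * f_analog / fs)

f = 10000.0   # test frequency in Hz (arbitrary)
for fs in (44100.0, 88200.0, 176400.0):
    print(fs, blt_warped_freq(f, fs))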

As for the step invariant method, another problem is that the expressions get ill-conditioned when the poles coincide and you need to take proper care of that. My personal preference is to use BLTPT for audio and step-invariant method only for control signals (possibly).

Post

I agree with Vadim that trapezoidal integration is the best "default" integration method to use for audio processing, as it has the best tradeoff for maintaining both amplitude and phase, which is especially important for resonant filters. Trapezoidal is A-Stable, so it's fine for linear models, or for weakly non-linear models where the effective cutoff frequency is modulated downwards by the non-linearities (look at the derivative of the non-linearity: if it is a bell shape you should be fine). For more difficult models you need either to add special limiting tricks to trapezoidal or to switch to a method that is both A-Stable and L-Stable.
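As a concrete reference point, here's a minimal sketch (my own names, not code from the book or any product) of a trapezoidal one-pole lowpass solved in the zero-delay feedback style:

Code: Select all

import math

def trap_onepole_lp(x, fc, fs):
    g = math.tan(math.pi * fc / fs)   # prewarped integrator gain
    G = g / (1.0 + g)
    s = 0.0                           # trapezoidal integrator state
    y = []
    for xn in x:
        v = (xn - s) * G              # solve the implicit (zero-delay) feedback
        yn = v + s
        s = yn + v                    # trapezoidal state update
        y.append(yn)
    return y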
The Glue, The Drop - www.cytomic.com

Post

As far as I understood, unlike other integrators, bilinear integrators can be chained:

A chain of bilinear integrators is equivalent to the bilinear transform of the chain of integrators.
Thus solving the feedback connection solves the whole feedback loop. This preserves the chain's topology (except for the feedback connection, which receives feedback from multiple cells).

This is not true for step invariant integrators.
The chain of step invariant integrators is not equivalent to the step invariant transform of the chain of integrators.
So, to solve the feedback loop, one needs to go the matrix way... which does not preserve the topology at all.
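A quick numerical way to convince yourself of the first statement (just a sketch, assuming SciPy): the coefficients of the bilinear transform of a chain of two integrators are exactly the coefficients of the product of two bilinear-transformed integrators.

Code: Select all

import numpy as np
from scipy.signal import bilinear

fs = 48000.0

b1, a1 = bilinear([1.0], [1.0, 0.0], fs)        # BLT of one integrator, H(s) = 1/s
b2, a2 = bilinear([1.0], [1.0, 0.0, 0.0], fs)   # BLT of the chain, H(s) = 1/s^2

# chaining two transformed integrators = multiplying their transfer functions
print(np.allclose(np.convolve(b1, b1), b2))     # -> True
print(np.allclose(np.convolve(a1, a1), a2))     # -> True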
See you here and there... Youtube, Google Play, SoundCloud...

Post

You can chain any integrator type. The problem with the step invariant transform is that applying this transform to the integrators is not equivalent to applying it to the entire system. That is, this transform cannot be consistently defined in terms of the discretization of the integrators.

Post

Exactly so, Vadim!
That's true even for systems without feedback. The Euler integrator is not "naive" per se; it's the way the integrators are chained that can be considered "naive".

That's true even in the case of a chain of two step invariant integrators.

The step invariant transform of a single integrator is given by:

Code: Select all

diff. Matrix of the free running system during one time step:
        x  y  
 x' =  [0  0]     x is constant
 y' =  [a  0]     y is the integral of x with integration coeff a

The step invariant transition matrix is given by 
 
diff. Matrix            Transition Matrix
    [0 0]                 [1   0]
M = [a 0]     exp(t*M) =  [t*a 1]

which means
x_n+1 = x_n              (reminds us that it is step invariant)
y_n+1 = t*a*x_n + y_n    the output we care about

t: time step
a: integration coefficient
So, the implicit chain of two step invariant transformed integrators is

Code: Select all

  yA_n+1 = t*a * x_n    + yA_n
  yB_n+1 = t*a * yA_n+1 + yB_n     implicit
yB_n+1 can be expressed in terms of x_n, yA_n and yB_n by substitution:

Code: Select all

  yB_n+1 = t*a *      yA_n+1        + yB_n
                   /          \
                  /            \  
                 t*a * x_n + yA_n
                 
  = t^2*a^2 * x_n  + t*a * yA_n + yB_n 
Now take the step invariant transform of a chain of two integrators:

Code: Select all

    [0 0 0]             [1           0     0]
M = [a 0 0]  exp(t*M) = [t*a         1     0]
    [0 a 0]             [a^2*t^2/2   a*t   1]
here, we get

Code: Select all

yB_n+1 = t^2*a^2 * x_n / 2  + t*a * yA_n + yB_n
                      -----  
Although it is quite similar to the former result, the coefficient of x_n is not the same as before (note the 1/2 factor).
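The same 1/2-factor discrepancy can be checked numerically (a small sketch, assuming SciPy; a and t are arbitrary):

Code: Select all

import numpy as np
from scipy.linalg import expm

a, t = 3.0, 0.1                 # integration coefficient and time step (arbitrary)

# chain of two integrators as one system, state [x, yA, yB]
M = np.array([[0.0, 0.0, 0.0],
              [a,   0.0, 0.0],
              [0.0, a,   0.0]])
T = expm(t * M)

print(T[2, 0])                  # (t*a)^2 / 2 : global step invariant transform
print((t * a) ** 2)             # (t*a)^2     : what chaining two transformed integrators gives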

Contrary to the bilinear transform and its trapezoidal integrators, I don't see how the step invariant transform could be applied locally.

Maybe there is some substitution/variable change à la Fontana that could provide a sort of topology-preserving step invariant transform?
See you here and there... Youtube, Google Play, SoundCloud...

Post

Smashed Transistors wrote:Contrary to the bilinear transform and its trapezoidal integrators, I don't see how the step invariant transform could be applied locally.

Maybe there is some substitution/variable change à la Fontana that could provide a sort of topology-preserving step invariant transform?
Personally I'm not that much interested in the step invariant transform, since its effect on the amplitude response is less clear and it doesn't preserve the relationship between the amplitude and the phase, which is also important once you begin to combine filters. Anyway, I doubt the local application of this transform is possible (in the sense that it can't be made equivalent to the global application). The main intuitive reason for that is the ill-conditioning of the algebraic expressions for the matrix exponential. But that's just a feeling, I might be wrong.

The step invariant transform is nice for control signals, but then you usually don't have very complicated structures there and they can be handled manually. Since TPT is pretty much equivalent to the trapezoidal integration of the state-space equations, this integration is the way to apply TPT for the step invariant transform.

Post

Smashed Transistors wrote:...
Maybe there is some substitution/variable change à la Fontana that could provide a sort of topology-preserving step invariant transform?
Check out backwards Euler, which does preserve the topology and has an identical time domain shape to the impulse invariant transform when applied to a one pole low pass decay from a normalised voltage of 1 using the correct time constant mapping. Backwards Euler is widely used in circuit simulation of stiff circuits, but it introduces a lot of damping, so you have to oversample quite a lot to get decent phase response and so correct resonance behaviour for filters.

edit: I just noticed you were talking about the step invariant transform not the impulse invariant, but hopefully this is still useful to know.
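To illustrate the time constant mapping, here is a small sketch (my own, one way to read it): backward Euler applied to y' = -a*y reproduces the exact exponential decay sample for sample if its coefficient is warped so that 1/(1 + a_be*T) = exp(-a*T).

Code: Select all

import numpy as np

a, fs, N = 2000.0, 48000.0, 64          # decay rate, sample rate, sample count (arbitrary)
T = 1.0 / fs

a_be = (np.exp(a * T) - 1.0) / T        # remapped coefficient for backward Euler

y_exact = np.exp(-a * T * np.arange(N)) # exact decay from a normalised value of 1

y = np.empty(N)
y[0] = 1.0
for n in range(1, N):
    y[n] = y[n - 1] / (1.0 + a_be * T)  # backward Euler update

print(np.allclose(y, y_exact))          # -> True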
The Glue, The Drop - www.cytomic.com

Post

Z1202 wrote:Since TPT is pretty much equivalent to the trapezoidal integration of the state-space equations, this integration is the way to apply TPT for the step invariant transform.
I meant integrating the state-space equations (analytically) for a step input.

Post

Just a side note on Chapter 6, Phasers. You recommend using 2-pole allpasses, but almost all phaser schematics I've looked at so far are odd-order feedback systems: if it's a 4-stage, the feedback goes from the last stage to the second, and ditto for most other even-order phasers. So even though a 2nd-order allpass will work, in a VA context I don't think it is optimal (but there's nothing wrong with it in a V context). Feeding back from an odd stage creates either a lowpass or a highpass in the feedback response, depending on polarity, so a typical phaser will not look like any of your diagrams.

P.S. Chapter 7 is insane, makes my head hurt. Great book!

Post

Ichad.c wrote:Just a side note on Chapter 6, Phasers. You recommend using 2-pole allpasses, but almost all phaser schematics I've looked at so far are odd-order feedback systems: if it's a 4-stage, the feedback goes from the last stage to the second, and ditto for most other even-order phasers. So even though a 2nd-order allpass will work, in a VA context I don't think it is optimal (but there's nothing wrong with it in a V context). Feeding back from an odd stage creates either a lowpass or a highpass in the feedback response, depending on polarity, so a typical phaser will not look like any of your diagrams.

P.S. Chapter 7 is insane, makes my head hurt. Great book!
Our mileage clearly differs with regard to phaser schematics, although I wouldn't claim I've seen them all. As for the unusual (or maybe it's usual in your book ;) ) feedback over an odd stage count, I'm not sure that immediately qualifies as lowpass or highpass behavior for the entire phaser, since it affects the feedback path only.

It's also not that I recommend using 2-poles. It's only that 2-poles allow you to control the notch spacing in ways that 1-poles do not; if you don't need this feature, there's probably no reason to use 2-poles at all.

Edit: actually, by using multimode 2-poles (multi-input or multi-output) you can enhance them with a "1-pole" output or a "1-pole" input (given they don't resonate), and thus construct feedback over an odd stage count.

Edit 2: using TSK 2-pole allpasses gives you the option to pick up or inject the signal directly between the two 1-poles. The behavior in the resonating case, or for "anti-resonance", will probably differ from multimode SVFs.

BTW I've got a different derivation of the TSK filters now. They are much more easily represented as a 2-pole bandpass (1-pole lp + 1-pole hp) with feedback, where the second filter in the chain is multimode and its "other" output is used as the whole filter's output. That is, pick the highpass output for the feedback path but the lowpass output for the main output signal, thus obtaining a 2-pole LP. By transposing the structure you get a multi-input filter at the beginning of the chain. Correspondingly, the Sallen-Key architecture is the same bandpass (1-pole hp + 1-pole lp) feedback structure with the main input going into the lowpass input of the first filter. I have updated the book text internally, but still haven't found time to polish a few other areas before I can release the next update.

Post

Z1202 wrote:
Ichad.c wrote:Just a side note on Chapter 6, Phasers. You recommend using 2-pole allpasses, but almost all phaser schematics I've looked at so far are odd-order feedback systems: if it's a 4-stage, the feedback goes from the last stage to the second, and ditto for most other even-order phasers. So even though a 2nd-order allpass will work, in a VA context I don't think it is optimal (but there's nothing wrong with it in a V context). Feeding back from an odd stage creates either a lowpass or a highpass in the feedback response, depending on polarity, so a typical phaser will not look like any of your diagrams.

P.S. Chapter 7 is insane, makes my head hurt. Great book!
Our mileage clearly differs with regard to phaser schematics, although I wouldn't claim I've seen them all. As for the unusual (or maybe it's usual in your book ;) ) feedback over an odd stage count, I'm not sure that immediately qualifies as lowpass or highpass behavior for the entire phaser, since it affects the feedback path only.

It's also not that I recommend using 2-poles. It's only that 2-poles allow you to control the notch spacing in ways that 1-poles do not; if you don't need this feature, there's probably no reason to use 2-poles at all.
I haven't gone through every phaser schematic, just the usual 20 or so commonly used ones. Some don't even have feedback at all, and there are even some I don't understand at all, like the ADA Phaser, so there probably are exceptions! I'm not sure why analog designers mostly use odd-order feedback, except maybe price, 'cause you'll usually need an extra inverter. Some are even explicitly odd-order phasers, like the orange Ross phaser, which has 5 allpass stages but takes the output from the 4th stage. That being said, I actually think even orders sound good too, so nothing stops us from doing so. About how even/odd feedback changes the *feedback* response: I probably chose poor words to describe it, and a picture is worth a thousand words -> scroll down and look at the low-frequency level on the left-hand side of the 1st notch in the picture "R28 feedback to 2nd stage (Default)":

http://www.electrosmash.com/mxr-phase90
Z1202 wrote: Edit: actually, by using multimode 2-poles (multi-input or multi-output) you can enhance them with a "1-pole" output or a "1-pole" input (given they don't resonate), and thus construct feedback over an odd stage count.
Yeah, I tried this (with an SVF), quite fun to play with. I even built a 4-stage phaser by pole-mixing a buffered ladder filter (not so great, btw).

Z1202 wrote: Edit 2: using TSK 2-pole allpasses gives you the option to pick up or inject the signal directly between the two 1-poles. The behavior in the resonating case, or for "anti-resonance", will probably differ from multimode SVFs.
That is a brilliant idea! That would make it pretty versatile.

Post

Today I made another late-night scan through this bible, err, The Art of VA Filter Design book, and while I am far from fully understanding it, I'll dare to make a few comments that, I hope, might be useful for a future revision.

1) During the discussion of nonlinearities it would be better to first introduce how these are usually solved using numerical analysis methods, before introducing the more efficient analytical solutions (I know this is not a numerical analysis book, but it is also obvious that there are a number of synths on the market that actually use numerical methods).

2) In the discussion of trapezoidal integration and the BLT, it would be better if a completely separate "classical DSP" usage of the z-transform and BLT were added as an appendix or as a preceding chapter, so that the two approaches could be clearly contrasted. You know, one begins with an analog prototype, which is a ready-to-use filter design and a compromise between certain conflicting requirements, makes a substitution from s to z (which is just a mapping from a "classical DSP" perspective), and then extracts parameters for a recursive filter from the transfer function H(z); while the resulting filter may have serious numerical errors, the design job finishes at that point (a rough code sketch of this flow follows after point 3 below). At the moment there is a discussion of the BLT in the same chapter, but the fact that it is in the same chapter, overlapping with the discussion of VA filters, is confusing unless one is informed about the typical usage of the BLT from the more classical point of view. I can see that there is a need for the overlapping discussion, but there is a "pedagogical" problem here, so to speak, and it is not solved by simply stating that 'the discussion of the basic concepts of discrete-time signal representation and processing' is outside the scope of the book, because the problem is not at such a basic level: there is a whole academic literature that completely ignores the methods mentioned in the book. Their perspective needs to be made explicit in a separate chapter, their methods need to be exposed and their limitations shown, and only then should a whole new chapter with solution proposals begin; if the discussion of the BLT needs to be duplicated, and if yet another introduction to the z-transform with a different formulation is required, so be it. Combining the two creates too condensed a presentation.

3) For the sake of completeness, a discussion of eigenfunctions, eigenvalues and integral transforms seems necessary; while this is briefly implied during the formula derivations in the second chapter, such a discussion unfortunately would not fit the overall discourse of the text, which is perhaps why it was omitted.
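To make point 2 concrete, the "classical" flow I have in mind is roughly the following (just a sketch assuming SciPy; the prototype order and cutoff are arbitrary):

Code: Select all

import numpy as np
from scipy.signal import butter, bilinear, lfilter

fs = 48000.0
fc = 1000.0

# 1) start from a ready analog prototype (2nd-order Butterworth lowpass),
#    designed at the prewarped cutoff
wa = 2.0 * fs * np.tan(np.pi * fc / fs)
b_s, a_s = butter(2, wa, btype='low', analog=True)

# 2) substitute s -> z via the bilinear transform; the design job ends here
b_z, a_z = bilinear(b_s, a_s, fs)

# 3) the resulting recursive filter is then used as-is
y = lfilter(b_z, a_z, np.random.randn(1024))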

There were also a few minor typos here and there (such as a j inside the cosine function in the Fourier series discussion (eq. 1.1 on page 2), and an exchange of the words numerator/denominator in the discussion of zeros and poles (p. 18, paragraph starting with "Recall that...")), but I realized that their correct forms were obvious to those who know, and weren't confusing to them.

If the intended audience for the book is DSP experts only, probably no changes are required, as these points are probably obvious to them; otherwise it would be better if they were addressed in a future revision. Not that the "uninitiated" will ever fully understand the VA filter bible, but you know, it does not have to be as cryptic as the Book of Revelation :-)

Finally, thanks for writing such a great book; it takes time to understand and appreciate. The self-initiation ceremony is a long one that is never complete.
~stratum~
