Book: The Art of VA Filter Design 2.1.2

DSP, Plugin and Host development discussion.

Post

hibrasil wrote: I tried the tanhs but this seemed to kill the resonance... any ideas?
Scale it, like: a = input/4; out = tanh(a)*4;

Probably better to work the scaling into the non-linear equation - if you're approximating it.

Anyway - thanks Robin! - especially for the low/high shelf!

Andrew

Post

hibrasil wrote:wow, thanks robin, that is really nice of you. Do you still suggest oversampling this filter? I get great performance from it without any oversampling.
you will notice the typical frequency-warping artifacts inherent in the bilinear transform design - which, in this case, takes place in the form of the bilinear (i.e. trapezoidal) integrators. if this bothers you, then oversampling would be advisable, otherwise it shouldn't be necessary. in particular, there shouldn't be any instability issues towards higher cutoff frequencies as in the non-ZDF designs.

Is this design based on the "Simper" SVF? The response definitely looks similar to a version of it that somebody posted on the Max/MSP forum.
it's based on the simpLer SVF that i posted earlier in this thread.
I tried the tanhs but this seemed to kill the resonance... any ideas?
this is really just a very ad-hoc way of introducing some saturating behavior. IIRC, saturating the states was suggested somewhere here in the forum as well. so far, i didn't explore nonlinearities in any depth. i'd be happy if someone else could fill this gap.
ps... what license is the latest code you posted under?
consider it public domain. if you do something significant with it, some credits would be nice but i don't insist on that. credits should go mainly to Vadim anyway, since the lion's share of the research work is in his book and i just implemented it and filled some gaps.
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

Robin from www.rs-met.com wrote:consider it public domain. if you do something significant with it, some credits would be nice but i don't insist on that.
Thank you very much for sharing! :) I have ported your code to REAPER's JS, and included it in my simple mono synth project (credits included, of course).

BTW, while comparing your ZDF-SVF with the RBJ cookbook filters, I figured out the following RBJ cookbook compatible alternate settings:

Code:

// RBJ cookbook BPF (constant 0 dB peak gain)
R2 = 1 / G;
cL = 0; cB = R2; cH = 0;

// RBJ cookbook notch
R2 = 1 / G;
cL = 1; cB = 0; cH = 1;

// RBJ cookbook APF
R2 = 1 / G;
cL = 1; cB = -R2; cH = 1;
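For anyone wondering where these cL/cB/cH mixing coefficients end up, here is a hedged sketch of a trapezoidal (ZDF) SVF tick in the style of Zavalishin's structure — the names g, R2, s1, s2 and the exact topology are my assumptions, not necessarily Robin's actual code:

```c
#include <math.h>

/* Sketch of a ZDF state-variable filter tick (Zavalishin-style).
   g = tan(pi * fc / fs), R2 = damping (1/Q for the resonant types),
   s1/s2 = integrator states, cL/cB/cH = output mixing coefficients. */
typedef struct { double g, R2, cL, cB, cH, s1, s2; } Svf;

static double svf_tick(Svf *f, double in) {
    /* solve the zero-delay feedback loop for the highpass output */
    double hp = (in - (f->R2 + f->g) * f->s1 - f->s2)
              / (1.0 + f->R2 * f->g + f->g * f->g);
    /* trapezoidal integrators: output, then state update */
    double v1 = f->g * hp, bp = v1 + f->s1;
    f->s1 = bp + v1;
    double v2 = f->g * bp, lp = v2 + f->s2;
    f->s2 = lp + v2;
    return f->cL * lp + f->cB * bp + f->cH * hp;
}
```

With cL = 1, cB = 0, cH = 0 this is the plain lowpass; the RBJ-compatible settings above only change the mix (and R2).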

Post

Tale wrote:
Robin from www.rs-met.com wrote:consider it public domain. if you do something significant with it, some credits would be nice but i don't insist on that.
Thank you very much for sharing! :) I have ported your code to REAPER's JS, and included it in my simple mono synth project (credits included, of course).
cool - now i can finally listen to this filter instead of just staring at plots :hihi:

BTW, while comparing your ZDF-SVF with the RBJ cookbook filters, I figured out the following RBJ cookbook compatible alternate settings...
errmmm - you are using the G parameter there? actually this is supposed to be relevant only for filter types that specify some gain at the cutoff frequency and it is actually unrelated to the bandwidth (B) parameter. both are user parameters that are supposed to be set independently (one or the other may be ignored by certain filter types, some types use both).

Post

Robin from www.rs-met.com wrote:errmmm - you are using the G parameter there? actually this is supposed to be relevant only for filter types that specify some gain at the cutoff frequency and it is actually unrelated to the bandwidth (B) parameter. both are user parameters that are supposed to be set independently (one or the other may be ignored by certain filter types, some types use both).
No, in my ported version I am not using the G parameter for band-pass constant peak, band-stop and all-pass, because I guess your parameters make more sense (I don't know, but I'm just following your lead here). However, the RBJ cookbook does use Q for these filters, so I just thought anyone wanting to 1:1 port their code over might want to use them this way.

From http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt:
BPF: H(s) = (s/Q) / (s^2 + s/Q + 1) (constant 0 dB peak gain)
notch: H(s) = (s^2 + 1) / (s^2 + s/Q + 1)
APF: H(s) = (s^2 - s/Q + 1) / (s^2 + s/Q + 1)

Post

Tale wrote:No, in my ported version I am not using the G parameter for band-pass constant peak, band-stop and all-pass, because I guess your parameters make more sense (I don't know, but I'm just following your lead here). However, the RBJ cookbook does use Q for these filters, so I just thought anyone wanting to 1:1 port their code over might want to use them this way.
ah - OK - i see. yes, i use a somewhat different parametrization than RBJ because i find "Q" parameters non-intuitive.

Post

If you're a cheapskate like me:

Code:

res = 1.f - r;

allpass = lp - bp * res + hp;
Just my 2 cents.

Post

For the less technically inclined, Will Pirkle made a video project demonstrating different filters, including the technique in this paper.

http://www.willpirkle.com/project-galle ... notes/#AN4

Included on the site is the source for the project shown in the video

Post

There is now also a "quick-and-dirty" A5 version of the book, primarily intended for e-book readers:
http://www.native-instruments.com/filea ... 0.3_A5.pdf

Regards,
{Z}

Post

I noticed there was some discussion about state-space formulations that I missed earlier.. so given (I converted vectors to lower-case):
Wolfen666 wrote: dx/dt = Ax + Bu
y = Cx + Du

with x the state vector, u the input, y the output, and A, B, C, D various matrices.
I really hope I get this right; it all works fine in practice, but I might have made mistakes describing it here...

You can convert that directly into matrix form, e.g. for implicit Euler:

x[n] = x[n-1] + A*x[n] + B*u[n]

which leads to:

(I-A)*x[n] = x[n-1] + B*u[n]

and in theory:

x[n] = (I-A)^-1 * (x[n-1] + B*u[n])
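As a concrete sketch of that solve for a 2-state system (my own illustration, matrix values made up; for larger systems you would factor instead of writing out the inverse):

```c
/* One implicit Euler step: solve (I - A)*x[n] = x[n-1] + B*u[n].
   For a 2x2 system the inverse of (I - A) can be written out
   explicitly via Cramer's rule. */
static void implicit_euler_step(const double A[2][2], const double B[2],
                                double x[2], double u) {
    double r0 = x[0] + B[0] * u;                  /* right-hand side   */
    double r1 = x[1] + B[1] * u;
    double m00 = 1.0 - A[0][0], m01 = -A[0][1];   /* M = I - A         */
    double m10 = -A[1][0],      m11 = 1.0 - A[1][1];
    double det = m00 * m11 - m01 * m10;           /* assumed nonzero   */
    x[0] = ( m11 * r0 - m01 * r1) / det;
    x[1] = (-m10 * r0 + m00 * r1) / det;
}
```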

Realistically you'd solve directly or LU decompose or something similar (inverting matrices is rarely the right thing to do). Trapezoidal works basically the same (with BLT tan(w/2) scaling for A and B), except you'd solve for a temporary

v[n] = x[n-1] + A*v[n] + B*u[n]

and then perform the update separately:

x[n] = x[n-1] + 2*(A*v[n] + B*u[n])
or equivalently
x[n] = v[n] + (A*v[n] + B*u[n])
or whatever other scheme that happens to give the same results.
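A hedged sketch of that two-stage trapezoidal step for a 2-state system (again my own illustration; A is assumed to already carry the tan(w/2) scaling):

```c
/* Trapezoidal step: solve v[n] = x[n-1] + A*v[n] + B*u[n] for v[n],
   then update x[n] = x[n-1] + 2*(A*v[n] + B*u[n]).  Since the solve
   gives A*v[n] + B*u[n] = v[n] - x[n-1], the update simplifies to
   x[n] = 2*v[n] - x[n-1]. */
static void trapezoidal_step(const double A[2][2], const double B[2],
                             double x[2], double u) {
    double r0 = x[0] + B[0] * u;
    double r1 = x[1] + B[1] * u;
    double m00 = 1.0 - A[0][0], m01 = -A[0][1];   /* M = I - A */
    double m10 = -A[1][0],      m11 = 1.0 - A[1][1];
    double det = m00 * m11 - m01 * m10;
    double v0 = ( m11 * r0 - m01 * r1) / det;
    double v1 = (-m10 * r0 + m00 * r1) / det;
    x[0] = 2.0 * v0 - x[0];
    x[1] = 2.0 * v1 - x[1];
}
```

One nice property to check: for a lossless system (skew-symmetric A) with no input, this step preserves the state energy exactly, which is the kind of unconditional stability the ZDF designs are after.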

Now for the point of further theoretical exercise, let's consider the following definition:

i[n] = 2*(A*v[n] + B*u[n])

Then trapezoidal rule can be written as:

v[n] = v[n-1] + (i[n-1] + i[n])/2

We could certainly store v[n] and i[n] directly, which would give us the DF1 scheme. Since during the next time-step we only need the sum, we can define:

x[n] = v[n] + i[n]/2

This gives us the above TDF2 scheme, which is still the good old trapezoidal rule with a simple memory optimization. Observe that this makes no assumptions about i[n], so the optimization is always valid (even in the non-linear or time-varying case). Also observe that DF1 is numerically more accurate (at ridiculously low frequencies) if and only if you either perform the sum with extended internal precision (eg. 80-bit x87) or order the DF1 summation as v[n]+(i[n]/2+i[n-1]/2), with significant parentheses. If you sum the DF1 in the TDF2 order and your internal precision matches what you store, then you get bit-wise identical numerical results either way (as if it would make any difference after a matrix equation solver..).

Bonus points: If you want to do circuit modelling, you can also observe that an MNA matrix works pretty much the same as the (I-A) matrix here. Pretty much all you need to do is replace x[n-1] with a vector of equivalent currents and do the update pass from the v[n] vector. You can then iterate the solution multiple times per time-step (and update once after) to converge the currents and current derivatives of any non-linear components, and then you're not that far from a Spice-like system, where you can start troubleshooting convergence issues..

Obviously using a brute-force matrix-equation solver isn't necessarily very real-time friendly, but it might still be helpful from a theoretical point of view.
2) We can also use the state space representation to get the equivalent topology preserving structures (I still don't get it now, but I'm going to study that)

3) Or, we can apply directly standard numerical integration methods like Runge-Kutta methods, or implicit Euler, implicit trapezoidal (bilinear), Gear / BDF2 etc. to the scheme.
I think the above method qualifies as doing both of these. Point 3 because that's what I'm doing (applying a numerical integration method directly to the state-space) and point 2 because you can simply interpret the matrices as notation for describing the topology we want to preserve (or the final structures we end up with.. which is kinda the point of preserving topology).

Post

I didn't try to follow mystran's post's maths in detail, however, as for the conversion of the state-space systems, this can be done in exactly the same way as for the 1-pole RC filter in the book. That is,

- rewrite the differential state-space equation as an integral one (in the matrix/vector form)
- the same can be seen as a set of one-dimensional integral equations
- each of those can be represented by a path in the block diagram containing an integrator (the paths will have shared segments of course)
- replace the integrators by their discrete-time models.
- done
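Applied to the 1-pole RC lowpass from the book, the recipe above might come out like this (a hedged sketch; the name g = tan(pi*fc/fs) and the variable handling follow the usual TPT convention, not necessarily the book's exact notation):

```c
/* 1-pole TPT lowpass: the integrator input is g*(in - lp); writing the
   trapezoidal integrator as lp = v + s with state update s = lp + v
   gives an implicit (zero-delay) loop that can be solved for v in
   closed form. */
typedef struct { double s; } Rc1;

static double rc_lowpass(Rc1 *f, double in, double g) {
    double v  = g * (in - f->s) / (1.0 + g);  /* solve the ZDF loop */
    double lp = v + f->s;                     /* integrator output  */
    f->s = lp + v;                            /* state update       */
    return lp;
}
```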

Regards,
{Z}

Post

Just a quick message to say that I have worked a lot these last few weeks on TPT stuff, mainly on Will Pirkle's articles, which you can find on his website, and also on an article by Aaron Wishnick written for DAFx-14:

http://www.willpirkle.com/app-notes/
http://www.dafx14.fau.de/papers/dafx14_ ... s_for_.pdf

In short, Will Pirkle has presented interesting ways to model analog filters, building on Vadim's work, but also using the electronic superposition theorem and a method from Aki Härmä that he improved. Moreover, Aaron Wishnick gave something I was looking for some time ago: the way to take any biquad in the Laplace domain and simulate it by tweaking the SVF TPT structure coefficients. cLP, cBP and cHP still need to be smoothed manually in the time-varying case, however.

Post

Wolfen666 wrote:Just a quick message to say that I have worked a lot these last few weeks on TPT stuff, mainly on Will Pirkle's articles, which you can find on his website, and also on an article by Aaron Wishnick written for DAFx-14:

http://www.willpirkle.com/app-notes/
http://www.dafx14.fau.de/papers/dafx14_ ... s_for_.pdf

In short, Will Pirkle has presented interesting ways to model analog filters, building on Vadim's work, but also using the electronic superposition theorem and a method from Aki Härmä that he improved. Moreover, Aaron Wishnick gave something I was looking for some time ago: the way to take any biquad in the Laplace domain and simulate it by tweaking the SVF TPT structure coefficients. cLP, cBP and cHP still need to be smoothed manually in the time-varying case, however.
Thanks for pointing out Aaron's paper; it is good that he has done all the fiddly maths and shown the stability in a rigorous way. I implemented all this many years ago in Ableton's EQ8 filters, and the feedback from users has been very positive.

I solved the transform from regular biquad to Trapezoidal SVF coefficients some time ago as an aside to other work I was doing. There are other aspects to the paper you may also find interesting, such as using sin(w) and sin(2*w) instead of tan(w) in the coefficient calculation:
http://www.cytomic.com/files/dsp/SvfLin ... dalSin.pdf

A word of warning as well: slapping arbitrary nonlinearities onto the state variables of implicit filters is not a good idea. I have done extensive modelling of the MS20 Sallen-Key v1 (Korg35) and v2 (LM13600) filters and have found Will Pirkle's work not to be very useful; there are much better ways to solve these non-linear filters properly while maintaining trapezoidal stability. If you want to have a listen to the results, check out my plugin The Drop: the MD filters use this "semi-implicit" method, while the HD filters are fully implicit trapezoidal, so they are smoother at high cutoffs and with large amounts of drive; the MD filters only require one step to solve, so they are more efficient. I let my users make the choice as to which type to use in any situation. I will publish a paper on this method in the future to share it with the audio community.

A quote from Aaron's paper, which uses the K-Method to solve the Trapezoidal SVF:

"These difference equations are actually implementable with
some extra computation. The K-Method [10] [15] is an algebraic
method for discretizing and solving systems in state space form,
and Zavalishin [16] presents a graphical method that is equivalent,
which is also applied to the SVF."
The Glue, The Drop - www.cytomic.com

Post

Thanks for pointing out your paper about SVFs, I will have a look at it too ;)

About Will Pirkle's method: of course the nonlinearity has been inserted into the block scheme the "naive" way, without trying to solve it properly in the equations, so any method involving actual solving might be better, such as mystran's or David Yeh's, which I know very well. However, I thought his approach was quite interesting, with the use of the superposition theorem, and the simulation of filters by sub-filters whose summed orders are greater than the order of the overall transfer function, so that the digital block scheme follows the analog block scheme a little more closely.

Post

Aaron Wishnick has also provided some source code which may be used to try the different time-varying-friendly structures for simulating IIR filters described in his article. It is available here:

https://github.com/iZotope/time_varying ... filter.cpp

However, I have not been able to get anything out of the "tdf2_rabenstein_czarnach" function; there may be a mistake somewhere in his code. Does anyone here know anything about the stabilization method described in the article "Stability of recursive time-varying digital filters by state vector transformation" by R. Rabenstein and R. Czarnach?

Thanks in advance :wink:
