You are probably already using Zero Delay Feedback filters, so let your customers know!

DSP, Plugin and Host development discussion.

Post

Wolfen666 wrote: Here is my take:
...
So, in short, any first-order filter modelled from the continuous domain to the digital domain can be made with or without this Z^-1 on the feedback. Using the bilinear transform or the TPT techniques is like doing the inversion with an implicit integration method, and so without an additional Z^-1 in the feedback loop to simplify the equation. So we can say that any biquad from the RBJ cookbook can be seen as a "zero delay feedback filter", even though you obviously need Z^-1 units in the digital implementation; they are there to write the discretization scheme, not to simplify any delay-free loops.
Indeed, I couldn't agree more. Thanks for your excellent summary.
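To make the linear case concrete, here is a minimal sketch of a trapezoidal/TPT-style one-pole lowpass where the delay-free feedback loop is solved algebraically rather than broken with an extra Z^-1 (the variable names are just illustrative, not taken from any particular post or paper):

Code: Select all

#include <cmath>

// Trapezoidal (TPT-style) one-pole lowpass.
// The implicit loop v = g*(x - (s + v)) is solved algebraically,
// so no unit delay is inserted into the feedback path.
struct OnePoleLP
{
    double g = 0.0; // prewarped integrator gain
    double s = 0.0; // integrator state

    void setCutoff(double cutoffHz, double sampleRate)
    {
        const double pi = 3.141592653589793;
        g = std::tan(pi * cutoffHz / sampleRate); // bilinear prewarping
    }

    double process(double x)
    {
        double v = g * (x - s) / (1.0 + g); // solve the zero-delay loop
        double y = v + s;                   // lowpass output
        s = y + v;                          // trapezoidal state update
        return y;
    }
};

The tan() prewarping is exactly the bilinear transform, so the linear transfer function matches a direct-form version; only the topology differs.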

Wolfen666 wrote: Then, doing that stuff with an additional Z^-1 in the feedback would be so dumb for accuracy that it is not worth mentioning... So, it makes more sense for me to talk about that when nonlinear blocks are used.
Possibly you may prefer forward Euler to avoid a division if your cutoff frequency is low and you need audio-rate modulation, but yes, I agree: even one division per sample is pretty cheap these days. It is the non-linear case where things get more CPU-intensive to solve.

Wolfen666 wrote: And talking about "zero delay filters" is just nonsense (follow my look :hihi: )
This is just part of the confusion. The full term is meant to be "zero delay feedback filters", but it is very easy to get the feedback and the filter confused when TLAs are bandied about.
Wolfen666 wrote: Tell me if I said too many stupid things :wink:
Not a single stupid thing in sight from my point of view; thank you for communicating your take on all this for everyone.
The Glue, The Drop - www.cytomic.com

Post

Thanks for your nice comments, andy :wink:
You can also solve an entire filter block in one go using Newton iteration, or you can process them one by one as Antti did back then in his well-known paper about the Moog filter. I don't see any reason to limit the analysis to the feedback case; if you use Newton it doesn't even matter whether there's feedback or not.

Richard
My point is just that marketing, and sometimes developers too, focus too much on a single aspect of things, one which correlates somewhat with realism but not that much with what we like or dislike in a filter sound, that being a matter of taste. Even if it remains something very important in the algorithm, I can't say the contrary, it is only one of several aspects that matter, like the CPU consumption, the component models chosen, the iterations between the model and the original, and the right choice of method for solving the implicit equations. I can say that also because I have been using Newton's method for a long time, and I have only come across this talk about zero-delay feedback filters recently.

Moreover, it's difficult not to be impressed by Diva or andy's work, which are not just synths + 0df filters but a lot of other things too: what I said before about optimization, models etc., and passion on top of that. I can tell when I try the products and when I see the exchanges between the developers and their customers, or the way they think all day long about how to improve this or that aspect of the technology. I could say the same thing in another domain about Valhalla's developer, for example :hihi:

So this 0df filter discussion is kind of sad, with all this "pathos", since it is not really the thing that matters most, nor the most impressive/useful part of the technology/products discussed here, only a part of it. And to be honest, I don't have much interest in the delay-free loop resolution discussion nowadays, since it has never been a problem in what I do, which is far away from synth filters, but I have read with a lot of attention about the time-varying aspect of things, as seen in TPT and in the general discussions about topology properties.

And I think there is sometimes a confusion here between the terms "zero-delay feedback filters" and "topology-preserving transforms". For me, they are two very different things, since a digital filter implemented with the DF1 topology is a 0dff but is not a TP topology, isn't it?

Anyway, I would be happy if I can say a few constructive things in this talk, since I have learnt a lot from Urs, Vadim, andy, mystran and other people's contributions here :wink:

Post

Wolfen666 wrote: Thanks for your nice comments, andy :wink:
It's nice to have someone who can see how this all fits into a broader framework and can see how "feedback" is such an odd distinction to make.
Wolfen666 wrote: My point is just that marketing, and sometimes developers too, focus too much on a single aspect of things, one which correlates somewhat with realism but not that much with what we like or dislike in a filter sound, that being a matter of taste. Even if it remains something very important in the algorithm, I can't say the contrary, it is only one of several aspects that matter, like the CPU consumption, the component models chosen, the iterations between the model and the original, and the right choice of method for solving the implicit equations. I can say that also because I have been using Newton's method for a long time, and I have only come across this talk about zero-delay feedback filters recently.
LOL. Yes, Newton's method has been around for quite a while. I have been using it for a long time too, as have many developers, since long before all this zero-delay feedback filter stuff. All this business of preserving the topology of a circuit is generally called "circuit simulation", and it is a well-developed area that has been around since the birth of circuits; the maths covering how to solve these systems of non-linear equations has been around even longer.

I think developers get excited when they sort out a new (for them) bit of maths and naturally want to share this excitement with the world. It is easy to invent acronyms and catchphrases for these "new" solutions, these get used in marketing, and here we are with a plethora of TLAs describing long-established maths in different and obscuring ways. This kind of thing will continue and is not unique to music DSP.
Wolfen666 wrote: And I think there is sometimes a confusion here between the terms "zero-delay feedback filters" and "topology-preserving transforms". For me, they are two very different things, since a digital filter implemented with the DF1 topology is a 0dff but is not a TP topology, isn't it?
Yes, I think this is part of the problem with the terminology: people sometimes think of the two as interchangeable. A forward Euler (explicit) filter like a Chamberlin SVF is topology preserving, but it is not a "zero delay feedback filter". A DF1 biquad based on the Laplace transform of an SVF is not topology preserving, but it is a "zero-delay feedback" filter. Then you have filters that are both topology preserving and solved with trapezoidal integration (implicit).
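For illustration, here is a minimal sketch of the first case, the Chamberlin SVF just mentioned, using the usual coefficient formula (variable names are illustrative):

Code: Select all

#include <cmath>

// Chamberlin state variable filter: explicit (forward Euler) integration.
// Topology preserving, but not a "zero delay feedback" structure: the
// feedback path uses the band state from the previous update, so the
// cutoff needs to stay well below fs/2 (or be oversampled).
struct ChamberlinSVF
{
    double f = 0.0;               // frequency coefficient
    double q = 1.0;               // damping (1/Q)
    double low = 0.0, band = 0.0; // integrator states

    void set(double cutoffHz, double sampleRate, double Q)
    {
        const double pi = 3.141592653589793;
        f = 2.0 * std::sin(pi * cutoffHz / sampleRate);
        q = 1.0 / Q;
    }

    void process(double x, double& lowOut, double& bandOut, double& highOut)
    {
        low += f * band;                    // explicit integrator update
        double high = x - low - q * band;   // feedback uses last band state
        band += f * high;                   // explicit integrator update
        lowOut = low; bandOut = band; highOut = high;
    }
};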

I have even solved an in-between explicit and implicit SVF and Sallen-Key filter, which has the same frequency response as a trapezoidal-integrated one and is stable over the entire frequency and resonance range, but is an explicit method, so the non-linearities can all be applied trivially and solved for in one step. This really blurs the line between explicit and implicit methods, offering the frequency response and stability advantages of implicit methods with the low CPU cost of explicit methods. When I started out in DSP I viewed this sort of method as the "holy grail" I was always looking for; I've since moved on.
Wolfen666 wrote: Anyway, I would be happy if I can say a few constructive things in this talk, since I have learnt a lot from Urs, Vadim, andy, mystran and other people's contributions here :wink:
Thanks to you as well for contributing!
The Glue, The Drop - www.cytomic.com

Post

I think that topology-preserving methods target a different audience than the pure mathematical formulas.

For example, when I set out to implement a 4-pole SVF, I stumbled upon the Linkwitz-Riley crossover filters. (Especially this paper.)

There's a schematic on page 4 which shows the basic structure of the filter.

It's pretty easy to see that you have 4 integrators and 4 feedback paths. Using the topology-preserving method, I modelled the filter just as it works in the schematic, solved the different feedback paths, and it worked.

For me, this is a much more 'natural' way of converting analogue filters to digital ones, and it feels more like simplified circuit simulation. Still, the results are great and you get pretty stable and useful stuff without special tweaking or oversampling. The 2nd-order SVF built as a crossover is stable over the whole fs/2 range.
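As a concrete sketch, here is one common formulation of such a trapezoidal-integrated (topology-preserving, zero-delay feedback) 2nd-order SVF, with the delay-free loops solved algebraically; the coefficient and state names are illustrative rather than taken from any particular paper:

Code: Select all

#include <cmath>

// 2nd-order state variable filter, trapezoidal integration, delay-free
// loops solved algebraically. ic1eq/ic2eq are the integrator (capacitor)
// states; the structure is stable for any cutoff up to Nyquist.
struct TrapezoidalSVF
{
    double a1 = 0.0, a2 = 0.0, a3 = 0.0; // precomputed loop coefficients
    double k = 1.0;                      // damping (1/Q)
    double ic1eq = 0.0, ic2eq = 0.0;     // integrator states

    void set(double cutoffHz, double sampleRate, double Q)
    {
        const double pi = 3.141592653589793;
        double g = std::tan(pi * cutoffHz / sampleRate); // prewarped gain
        k  = 1.0 / Q;
        a1 = 1.0 / (1.0 + g * (g + k));
        a2 = g * a1;
        a3 = g * a2;
    }

    void process(double x, double& lowOut, double& bandOut, double& highOut)
    {
        double v3 = x - ic2eq;
        double v1 = a1 * ic1eq + a2 * v3;         // bandpass node
        double v2 = ic2eq + a2 * ic1eq + a3 * v3; // lowpass node
        ic1eq = 2.0 * v1 - ic1eq;                 // trapezoidal state updates
        ic2eq = 2.0 * v2 - ic2eq;
        lowOut  = v2;
        bandOut = v1;
        highOut = x - k * v1 - v2;
    }
};

One way to get a 4th-order Linkwitz-Riley split is to cascade two of these sections with Butterworth damping (Q of about 0.707) on each output branch.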

And, in the end, it's all a matter of taste. IMHO the filters I built using TPT are the best I've ever come up with and have a beautiful resonance behaviour. I don't think that I'll ever switch back to DF filters again ... which is also no longer necessary, because I can just e.g. stick the coeffs of a 4th-order transfer function into the 4-pole L-R x-over and be done.

... it just feels more like working with 'perfect' analogue components/building blocks instead of math ... and I like this :D
... when time becomes a loop ...
---
Intel i7 3770k @3.5GHz, 16GB RAM, Windows 7 / Ubuntu 16.04, Cubase Artist, Reaktor 6, Superior Drummer 3, M-Audio Audiophile 2496, Akai MPK-249, Roland TD-11KV+

Post

neotec wrote: I think that topology-preserving methods target a different audience than the pure mathematical formulas.

For example, when I set out to implement a 4-pole SVF, I stumbled upon the Linkwitz-Riley crossover filters. (Especially this paper.)

There's a schematic on page 4 which shows the basic structure of the filter.

It's pretty easy to see that you have 4 integrators and 4 feedback paths. Using the topology-preserving method, I modelled the filter just as it works in the schematic, solved the different feedback paths, and it worked.

For me, this is a much more 'natural' way of converting analogue filters to digital ones, and it feels more like simplified circuit simulation. Still, the results are great and you get pretty stable and useful stuff without special tweaking or oversampling. The 2nd-order SVF built as a crossover is stable over the whole fs/2 range.

And, in the end, it's all a matter of taste. IMHO the filters I built using TPT are the best I've ever come up with and have a beautiful resonance behaviour. I don't think that I'll ever switch back to DF filters again ... which is also no longer necessary, because I can just e.g. stick the coeffs of a 4th-order transfer function into the 4-pole L-R x-over and be done.

... it just feels more like working with 'perfect' analogue components/building blocks instead of math ... and I like this :D
Yes, you are right, I feel the same way. Directly integrating the original circuit while preserving the topology is what you are talking about, and this is exactly what circuit simulators do, but they also handle non-linearities if you want. These methods are not new. Please read this document and you can see how Vadim's TPT is just a special case of nodal analysis using trapezoidal integration:

www.cytomic.com/files/dsp/OnePoleLowPass.pdf

With regular circuit maths you can specify the voltage over the capacitor either as the difference between the voltages at either end of the capacitor, or as a battery voltage that you add to the voltage at one side of the capacitor to get the voltage at the other side. From what I can see, TPT is like state-space analysis where you are limited to using the latter specification of capacitor voltage.
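As a tiny illustration of what "nodal analysis using trapezoidal integration" looks like in code, here is a plain RC lowpass with the capacitor replaced by its trapezoidal companion model (a conductance in parallel with a history current source); the component values are just illustrative:

Code: Select all

// RC lowpass solved by nodal analysis with a trapezoidal capacitor
// companion model: the capacitor becomes a conductance Gc = 2*C/T in
// parallel with a history current source Ieq = Gc*v + i.
struct RCLowpassNodal
{
    double R = 1e3;            // series resistance, ohms (illustrative)
    double C = 100e-9;         // capacitance, farads (illustrative)
    double T = 1.0 / 48000.0;  // sample period, seconds
    double v = 0.0;            // capacitor (node) voltage state
    double i = 0.0;            // capacitor current state

    double process(double vin)
    {
        double Gc  = 2.0 * C / T;  // companion conductance
        double Ieq = Gc * v + i;   // history current source
        // KCL at the output node: (vNew - vin)/R + Gc*vNew - Ieq = 0
        double vNew = (vin / R + Ieq) / (1.0 / R + Gc);
        i = Gc * (vNew - v) - i;   // trapezoidal capacitor current update
        v = vNew;
        return v;
    }
};

This is the same trapezoidal integration as in the TPT one-pole earlier in the thread, just written in terms of circuit quantities (node voltage and capacitor current) instead of an integrator state.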

PS: in addition to a 4-pole SVF there is also what is called a "leapfrog" structure, from which you can also get simultaneous 4-pole low-pass and 4-pole high-pass outputs for a crossover.
The Glue, The Drop - www.cytomic.com

Post

neotec wrote: Still, the results are great and you get pretty stable and useful stuff without special tweaking or oversampling.
Probably you want to oversample it anyway, because of frequency response warping. It makes sense at low sample rates. I can show responses of oversampled and non-oversampled filters if needed :wink:

Post

2DaT wrote: Probably you want to oversample it anyway, because of frequency response warping. It makes sense at low sample rates. I can show responses of oversampled and non-oversampled filters if needed :wink:
Agreed, with non-linear elements you have to oversample anyway. The interesting things (and the CPU consumption) come with the non-linear elements, and processing these is the fun part, yet less talked about(?). So far there is the super simple but not that nice 'distort the state(s)' approach and mystran's nice Maxima explanation of using an Euler solver; are there any other non-iterative solvers to look into?

Post

Ichad.c wrote: and mystran's nice Maxima explanation of using an Euler solver
Where? :hyper:

Btw, here are some nice visuals of frequency responses.
44 kHz, saw osc, 2-pole filter model (for 4-pole the difference is even bigger).
Non-oversampled: [image: frequency response plot]
Oversampled: [image: frequency response plot]

Notice how resonance got fat... literally :D
Those frequencies from 12 kHz to 16 kHz also get a nice amplification, quite audible for me.

Post

Ichad.c wrote:
2DaT wrote: Probably you want to oversample it anyway, because of frequency response warping. It makes sense at low sample rates. I can show responses of oversampled and non-oversampled filters if needed :wink:
Agreed, with non-linear elements you have to oversample anyway. The interesting things (and the CPU consumption) come with the non-linear elements, and processing these is the fun part, yet less talked about(?). So far there is the super simple but not that nice 'distort the state(s)' approach and mystran's nice Maxima explanation of using an Euler solver; are there any other non-iterative solvers to look into?
There are two types of Euler solvers: implicit (backward Euler) and explicit (forward Euler).

As I said previously, it is possible to use explicit methods and still get a trapezoidal frequency response and full-frequency-range stability, so you can add non-linearities without needing an iterative solver. As soon as you use an implicit integration method and have non-linearities on the terms you are solving for, you will require an iterative root-finding method; this is unavoidable. It is possible to pre-solve some limited cases and use multi-dimensional table lookups.
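A toy example of that iterative case (not any particular product's filter): a one-pole lowpass with a tanh saturator inside the feedback, integrated with the trapezoidal rule and solved per sample with Newton's method:

Code: Select all

#include <cmath>

// Trapezoidal one-pole with tanh saturation, solved with Newton's method.
// Per sample the implicit equation is
//   F(y) = y - g*(tanh(x) - tanh(y)) - s = 0
// where s collects the history terms known from the previous sample.
struct TanhOnePole
{
    double g = 0.0;                 // prewarped integrator gain
    double xPrev = 0.0, yPrev = 0.0;

    void setCutoff(double cutoffHz, double sampleRate)
    {
        const double pi = 3.141592653589793;
        g = std::tan(pi * cutoffHz / sampleRate);
    }

    double process(double x)
    {
        double s = yPrev + g * (std::tanh(xPrev) - std::tanh(yPrev));
        double y = yPrev;                        // previous output as the initial guess
        for (int it = 0; it < 8; ++it)           // Newton iterations
        {
            double th   = std::tanh(y);
            double F    = y - g * (std::tanh(x) - th) - s;
            double dFdy = 1.0 + g * (1.0 - th * th);
            double step = F / dFdy;
            y -= step;                           // Newton update
            if (std::fabs(step) < 1.0e-9)
                break;
        }
        xPrev = x;
        yPrev = y;
        return y;
    }
};

Drop the tanh and the loop collapses to a single division, which is the linear case discussed earlier in the thread.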

If by "mystran's nice Maxima explanation of using an Euler solver" you mean the methods outlined in this thread:

http://www.kvraudio.com/forum/viewtopic.php?t=349859

then this isn't Euler (forward or backward), it is trapezoidal, and it isn't a one-step method, it is an iterative solver where he only does the first step. This means it isn't very accurate, and especially if you increase the drive it becomes numerically unstable. He has contributed a very important idea: using f(x)/x instead of f'(x) as the derivative, which is handy to prevent crazy intermediate values as you converge on a solution, but it does slow down convergence; there are always tradeoffs. Otherwise it is the same nodal analysis + implicit integration that is typically used in circuit simulators and has been done for a very long time. I suggest you read the Qucs technical papers for a thorough summary of these methods:

http://qucs.sourceforge.net/tech/technical.html
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote: This means it isn't very accurate, and especially if you increase the drive it becomes numerically unstable. He has contributed a very important idea: using f(x)/x instead of f'(x) as the derivative, which is handy to prevent crazy intermediate values as you converge on a solution, but it does slow down convergence; there are always tradeoffs.
Well, it's important to realize that "slow convergence" doesn't just mean "a few more iterations" but a lower order too. As far as I can tell it's no better than linear, so for any sort of accuracy you need such impractical iteration counts that it doesn't make any sense. [edit: I do admit I didn't fully understand the implications back then]

Basically, the moment you're willing to do an average of 2 or 3 iterations, it's probably time to switch to another scheme. But that's the thing: I picked it as a best-guess line fit for the situations where "absolutely no iteration" was an explicit design choice. :)

On the other hand, it's trivial to replace that scheme with a simple Newton-scheme (or secant or whatever): just replace the slopes with derivatives, add the constant f(x) terms and put it all inside an iteration loop. So the same basic approach still works. Another thing to consider, should I decide to write an updated version of that old example code [should make it more explicit how to do this].
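To make the difference concrete on a toy implicit equation, say y + g*tanh(y) = x (an assumed example, not the exact system from that thread): the "f(x)/x" scheme replaces tanh(y) with the gain tanh(y0)/y0 taken at the current estimate, while the Newton scheme uses the tangent line:

Code: Select all

#include <cmath>

// Two linearizations for solving y + g*tanh(y) - x = 0 (toy example).

// tanh(y)/y with the y -> 0 limit handled.
static double tanhOverY(double y)
{
    return std::fabs(y) < 1e-12 ? 1.0 : std::tanh(y) / y;
}

// "f(x)/x" style step: treat tanh as a fixed gain at the current estimate
// and solve the resulting linear equation. Bounded, but converges slowly.
static double gainStep(double x, double g, double y0)
{
    return x / (1.0 + g * tanhOverY(y0));
}

// Newton step: tangent-line linearization at the current estimate.
// Converges quadratically near the solution.
static double newtonStep(double x, double g, double y0)
{
    double th = std::tanh(y0);
    double F  = y0 + g * th - x;
    double dF = 1.0 + g * (1.0 - th * th);
    return y0 - F / dF;
}

The gain version can never produce an estimate larger in magnitude than x, which is the "no crazy intermediate values" property, while iterating newtonStep gives the faster convergence discussed above.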
andy-cytomic wrote: Otherwise it is the same nodal analysis + implicit integration that is typically used in circuit simulators and has been done for a very long time.
Well, in the thread you mentioned (at least not in the original post), I don't remember doing any nodal analysis, because I just pretty much assumed a bunch of differential equations as the ground truth. The point was to show how to turn something simple into something equally simple that sort-of works, without worrying about any circuit modelling as such (partially because at the time I was still trying to figure out the best way to do it).

From my point of view, the whole MNA part is a separate step. It solves the problem of getting from the circuit components and connections (the "netlist") to the equations. The nice thing about MNA is that it's very straightforward to do automatically. But that just gets you an ODE (or something close enough). Then you need to solve it using some method, and at that point it doesn't really matter whether you used MNA or some other alternative for the first step.

Post

mystran wrote:Well, in the thread you mentioned (at least not in the original post), I don't remember doing any nodal analysis, because I just pretty much assumed a bunch of differential equations as the ground truth.
Ok so where did the differential equations come from?

Perhaps you didn't do any circuit analysis, but if this is for the Moog, then all that was done for you by Antti. He applied Kirchhoff's laws to the circuit to make the differential equations:

Code: Select all

http://en.wikipedia.org/wiki/Kirchhoff's_circuit_laws
(the ' kills kvr's url handling). People can choose to apply either the voltage or current law depending on the situation, since they are smart. Automated systems just use nodal analysis since it's easier to apply in a systematic way. So please read "Kirchhoff's laws" wherever I say "nodal analysis" if you are doing things by hand; sorry for being lazy with the terminology.

I'm trying to point people in the direction of being able to solve circuits themselves from the schematic, and to do that, working out the differential equations is optional; you can just directly integrate the circuit if you want, there isn't any difference in the end.
The Glue, The Drop - www.cytomic.com

Post

mystran wrote:On the other hand, it's trivial to replace that scheme with a simple Newton-scheme (or secant or whatever): just replace the slopes with derivatives, add the constant f(x) terms and put it all inside an iteration loop. So the same basic approach still works. Another thing to consider, should I decide to write an updated version of that old example code [should make it more explicit how to do this].
This is the method I have been describing all along and have provided numerous references and URLs for; it is how non-linear circuits are solved. Yes, it is trivial to implement, but it is not trivial to get it to always converge with a low number of iterations, especially if you can't go changing the step size on the fly.
The Glue, The Drop - www.cytomic.com

Post

[OT] I just spotted this: http://www.signaldust.com/p-ineq.html , congrats mystran on your first commercial release, this is brilliant news!!!! There is also a KVR thread about it: http://www.kvraudio.com/forum/viewtopic.php?p=5570038 , sorry for the off-topic, but I just had to share my excitement here with mystran having just posted; as you can read from his posts he clearly knows his stuff, so this is wonderful news.
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote:
mystran wrote:Well, in the thread you mentioned (at least not in the original post), I don't remember doing any nodal analysis, because I just pretty much assumed a bunch of differential equations as the ground truth.
Ok so where did the differential equations come from?
You're missing the point. Getting from a circuit to code is really a two-step problem: first you need a model of the circuit, and then you need a method for numerically solving the model. You can solve each of these sub-problems separately, and they are essentially independent as far as particular techniques go.

I (quite explicitly I might add) made no attempt to do anything about the first part in that thread (because the thread wasn't about circuit modeling; it was about zero delay filters), so there was no need to even consider MNA. I picked a bunch of familiar equations as given, and it doesn't really matter how those equations came to be, since the subject was the second part: how to write an implementation that numerically solves those.

I'm making such a fuss about this, because I think both problems are interesting in their own right... but at the same time you can pretty much "plug and play" any solution for any of the sub-problems and still have it all work just fine.

Post

mystran wrote:
I'm making such a fuss about this, because I think both problems are interesting in their own right... but at the same time you can pretty much "plug and play" any solution for any of the sub-problems and still have it all work just fine.
Exactly.
