## What is the principle behind zero delay feedback filters?

abique
KVRian

625 posts since 26 May, 2013, from Berlin
Hi,

I wonder what is the principle behind zero delay feedback filters?

Thanks!
matt42
KVRian

996 posts since 9 Jan, 2006
Short answer. The instantaneous feedback can be solved with simple algebra. In the case of a linear (no clipping, saturation etc) filter at least.
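For a single one-pole stage, that algebra looks something like this (a hedged sketch, not matt42's tutorial code; the names `OnePoleZDF`, `g` and `s` are mine, with `g` assumed to be the prewarped cutoff gain and `s` the integrator state):

```cpp
#include <cmath>

// One-pole trapezoidal-integrator lowpass with the instantaneous
// feedback solved algebraically. The implicit equation is
//   y = g*(x - y) + s
// which rearranges to the explicit form below, so no unit delay is needed.
struct OnePoleZDF {
    float s = 0.f; // integrator state

    float tick(float x, float g) {
        // solve y = g*(x - y) + s  =>  y*(1 + g) = g*x + s
        float y = (g * x + s) / (1.f + g);
        s = 2.f * y - s; // trapezoidal state update
        return y;
    }
};
```

With a nonlinearity (tanh etc.) inside the loop this rearrangement is no longer possible, which is where the numerical methods below come in.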

I have a very simple (I hope) tutorial on the subject. I've been too lazy/preoccupied to upload it yet. I'll try to do so in a couple of hours when I get the chance.

Was going to do a quick example, but will have to do it later.
abique
KVRian

625 posts since 26 May, 2013, from Berlin
matt42 wrote:Short answer. The instantaneous feedback can be solved with simple algebra. In the case of a linear (no clipping, saturation etc) filter at least.

I have a very simple (I hope) tutorial on the subject. I've been too lazy/preoccupied to upload it yet. I'll try to do so in a couple of hours when I get the chance.

Was going to do a quick example, but will have to do it later.

By using integration?
Thanks a lot, I'm looking forward to reading the tutorial!
matt42
KVRian

996 posts since 9 Jan, 2006
Abique, I'm not sure what you meant by integration. The tutorial makes use of integrators, rather simple digital filters, not integration as in calculus.

I'm going over the tutorial now and fixing it up. I just hope it'll be of some use.
Urs
u-he

22010 posts since 7 Aug, 2002, from Berlin
Here's a nice sounding filter with a unit delay (one sample, "z-1") in the feedback path:

Code: Select all
```
class filter
{
public:
   ... stuff ...

   float iceq1;   // states in form
   float iceq2;   // of current
   float iceq3;   // equivalents
   float iceq4;   // of capacitors

   float y4;      // delayed feedback

   float tick( float in )
   {
      float g = tan( PI * cutoff/samplerate );
      float gDiv = 1.f/(1.f + g);

      float x0 = (in - resonance * y4 );

      float y1 = (g * tanh( x0 ) + iceq1 ) * gDiv;
      float y2 = (g * tanh( y1 ) + iceq2 ) * gDiv;
      float y3 = (g * tanh( y2 ) + iceq3 ) * gDiv;
      y4 = (g * tanh( y3 ) + iceq4 ) * gDiv;

      iceq1 = 2*y1 - iceq1;
      iceq2 = 2*y2 - iceq2;
      iceq3 = 2*y3 - iceq3;
      iceq4 = 2*y4 - iceq4;

      return y4;
   }
};
```

It's quite simple. Four stages, and the last stage is fed back to the input during the next iteration.
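As an aside, the `iceq = 2*y - iceq` lines are the trapezoidal ("bilinear") integrator state update. A minimal sketch of one integrator in isolation, with names of my own choosing (this is the building block behind each stage, not a copy of the code above):

```cpp
#include <cmath>

// One trapezoidal integrator:
//   v = g * x    (input scaled by the prewarped gain)
//   y = v + s    (integrator output)
//   s = y + v    (state update)
// Substituting v = y - s_old into the last line gives s_new = 2*y - s_old,
// which is exactly the "iceq = 2*y - iceq" form in the filter code.
struct TrapInt {
    float s = 0.f;

    float tick(float x, float g) {
        float v = g * x;
        float y = v + s;
        s = 2.f * y - s; // same as s = y + v
        return y;
    }
};
```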

But how do we remove this delay? - It's not possible to do so by an analytical approach - we can't solve the whole set of equations for y4 because there are a few terms with tanh(). We need to get cooking with a numerical solution.

Really, what it means is, we need to know the output before we calculate it. Impossible? - Nope.

A very, very simple solution is to

- make a filter that we can run without updating the state variables
- that lets us make a "guess" (or "estimate") for the output
- and returns the "error" between guess and resulting output sample
- which we can then refine over a few iterations, with various methods

This may look like this:

Code: Select all
```
class filterZDF
{
public:
   ... stuff ...

   float iceq1;   // states in form
   float iceq2;   // of current
   float iceq3;   // equivalents
   float iceq4;   // of capacitors

   float runFilter( float in, float outEstimate, float &y1, float &y2, float &y3 )
   {
      float g = tan( PI * cutoff/samplerate );
      float gDiv = 1.f/(1.f + g);

      float x0 = (in - resonance * outEstimate );

      y1 = (g * tanh( x0 ) + iceq1 ) * gDiv;
      y2 = (g * tanh( y1 ) + iceq2 ) * gDiv;
      y3 = (g * tanh( y2 ) + iceq3 ) * gDiv;
      float y4 = (g * tanh( y3 ) + iceq4 ) * gDiv;

      return y4 - outEstimate; // returns the error
   }

   float tick( float in )
   {
      float y1, y2, y3;

      float EstimateLow = -2.f;
      float errorLow = runFilter( in, EstimateLow, y1, y2, y3 );

      float EstimateHigh = 2.f;
      float errorHigh = runFilter( in, EstimateHigh, y1, y2, y3 );

      float outEstimate = 0.f;

      while( fabs( errorLow - errorHigh ) > 0.0001f )
      {
         outEstimate = ( EstimateHigh + EstimateLow ) * 0.5f; // midpoint of the bracket

         float newError = runFilter( in, outEstimate, y1, y2, y3 );

         if( sign( newError ) == sign( errorLow ) )
         {
            EstimateLow = outEstimate;
            errorLow = newError;
         }
         else
         {
            EstimateHigh = outEstimate;
            errorHigh = newError;
         }
      }

      iceq1 = 2*y1 - iceq1;
      iceq2 = 2*y2 - iceq2;
      iceq3 = 2*y3 - iceq3;
      iceq4 = 2*outEstimate - iceq4;

      return outEstimate;
   }
};
```

Here we're using a very simple method, the http://en.wikipedia.org/wiki/Bisection_method

- first we guess a really low value for y4, -2, our "EstimateLow", resulting in "errorLow"
- then we guess a really high value for y4, +2, our accordingly high "EstimateHigh" -> "errorHigh"
- in between those guesses lies the sought-after sample, which yields an error near 0
- then we cut the interval between the highest guess and lowest guess in half
- fetch the error for the new guess and make it our next EstimateLow or EstimateHigh, accordingly
- until we find a good enough guess
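The loop above can be sketched in isolation as a toy bisection root-finder, solving x² - 2 = 0 instead of the filter equation (all names here are made up for illustration, this is not the filter code):

```cpp
#include <cmath>

// Bisection on the toy function f(x) = x*x - 2 (root at sqrt(2)).
// Same idea as the filter solver: keep a bracket [lo, hi] whose
// endpoint errors have opposite signs and halve it until "good enough".
float bisect(float lo, float hi, float tol) {
    float fLo = lo * lo - 2.f;
    float mid = lo;
    while (hi - lo > tol) {
        mid = 0.5f * (lo + hi);          // midpoint, (lo + hi)/2
        float fMid = mid * mid - 2.f;
        if ((fMid < 0.f) == (fLo < 0.f)) { lo = mid; fLo = fMid; }
        else                             { hi = mid; }
    }
    return mid;
}
```

Each iteration halves the interval, which is where the "about one bit of accuracy per iteration" figure below comes from.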

As you can imagine, the bisection method is very, very slow. We'll need an average of 16 iterations to reach 16-bit accuracy. But it's good for explaining things.

I've omitted a lot of extra code, but d'oh... you can fill in the blanks (and fix whatever bugs) and it'll work.

Enjoy,

- Urs
Urs
u-he

22010 posts since 7 Aug, 2002, from Berlin
For those who want to use this and are in awe of the CPU usage (apart from the most certainly included bugs), here are a few quick improvements that make this code almost as fast as Diva:

- don't solve for the output/y4 directly, solve for feedback * y4 instead, but keep track of y4 to update the state. This way it converges much much faster at lower resonance

- use the http://en.wikipedia.org/wiki/False_position_method (much faster, guaranteed to work)

- also, use fabs( error ) < 0.00001 or similar to break out of the loop (not for speed-up, but for elegance)
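The false position method from the second tip can be sketched like this, again on the toy function x² - 2 rather than the filter (names invented for illustration, not Diva code):

```cpp
#include <cmath>

// False position (regula falsi) on f(x) = x*x - 2. Instead of the
// midpoint, the next guess is where the straight line through
// (lo, f(lo)) and (hi, f(hi)) crosses zero -- usually far fewer
// iterations than bisection for smooth functions.
float falsePosition(float lo, float hi, float tol) {
    float fLo = lo * lo - 2.f;
    float fHi = hi * hi - 2.f;
    float x = lo;
    for (int i = 0; i < 100; ++i) {            // iteration cap for safety
        x = lo - fLo * (hi - lo) / (fHi - fLo); // secant-line zero crossing
        float fx = x * x - 2.f;
        if (std::fabs(fx) < tol) break;         // error-based break, as in the tip
        if ((fx < 0.f) == (fLo < 0.f)) { lo = x; fLo = fx; }
        else                           { hi = x; fHi = fx; }
    }
    return x;
}
```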

Note that the beauty of this solution is that we only (uhm...) need to optimise the error for a single variable. This guarantees a correct result within the error margin we define. There are other methods which iteratively (often with a single extra step) optimise two to five variables at once. To the best of my knowledge, those can never be solved within a guaranteed error margin, i.e. if one estimate is too high and another too low, the output could still converge despite a garbled state.

Cheers,

- Urs
Last edited by Urs on Wed Jun 04, 2014 12:14 am, edited 1 time in total.
matt42
KVRian

996 posts since 9 Jan, 2006
OK, here's a first draft of the first part of the tutorial.

TPT Filters Made Simple

Hope someone finds it interesting/useful. It's quite late here and it hasn't been proofread, so please let me know about any typos, errors and whatever else might be wrong with it.

Oh yeah, I went with TPT (topology preserving transform). I hope no one gets too hung up on the name; it's just the term I'm familiar with, and as far as I know there's no universally adopted nomenclature.
abique
KVRian

625 posts since 26 May, 2013, from Berlin
Thanks a lot, matt42 and Urs!
mystran
KVRAF

4893 posts since 11 Feb, 2006, from Helsinki, Finland
Urs wrote:Here we're using a very simple method, the http://en.wikipedia.org/wiki/Bisection_method

[...]

As you can imagine, the bisection method is very, very slow. We'll need an average of 16 iterations to reach 16 bit accuracy. But, it's good to explain things.

Indeed, the nice thing about Bisection is simply that it's guaranteed to converge.

Urs wrote:- use the http://en.wikipedia.org/wiki/Secant_method (much faster, guaranteed to work)

Actually, it's not guaranteed to work. Unlike Bisection, the secant method doesn't bracket the solution between the end-points, so it suffers from the same convergence problems that you can get with Newton's method. Quote from the above Wikipedia page:
Wikipedia wrote:If the initial values are not close enough to the root, then there is no guarantee that the secant method converges. There is no general definition of "close enough" [...]

One can "fix" the secant method so that it always converges by choosing which one of the two previous values to keep (so the solution remains bracketed), but doing this leads to the regula falsi method, which, like Bisection, will only give you linear convergence. The above page does, however, explain an adjustment it calls the "Illinois algorithm" that will give you superlinear convergence (though still worse than secant).
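That Illinois adjustment can be sketched like this, once more on the toy function x² - 2 (a hedged illustration with invented names, not anyone's production code):

```cpp
#include <cmath>

// Illinois variant of regula falsi on f(x) = x*x - 2: whenever the same
// bracket end survives two steps in a row, its stored f-value is halved.
// This forces the "stuck" end to move eventually, restoring superlinear
// convergence while keeping the solution bracketed.
float illinois(float lo, float hi, float tol) {
    float fLo = lo * lo - 2.f;
    float fHi = hi * hi - 2.f;
    float x = lo;
    int side = 0; // which end was replaced last: -1 = lo, +1 = hi
    for (int i = 0; i < 100; ++i) {
        x = (lo * fHi - hi * fLo) / (fHi - fLo); // secant-line zero crossing
        float fx = x * x - 2.f;
        if (std::fabs(fx) < tol) break;
        if ((fx < 0.f) == (fLo < 0.f)) {
            lo = x; fLo = fx;
            if (side == -1) fHi *= 0.5f; // lo replaced twice: relax stuck hi
            side = -1;
        } else {
            hi = x; fHi = fx;
            if (side == +1) fLo *= 0.5f; // hi replaced twice: relax stuck lo
            side = +1;
        }
    }
    return x;
}
```

Halving the stored function value never changes its sign, so the bracket invariant is preserved.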

As for multiple variables.. you can just reformulate the problem as optimizing a state-vector and then check for tolerances in each of the dimensions separately.
<- plugins | forum
Richard_Synapse
KVRian

823 posts since 19 Dec, 2010
Urs wrote:For those who want to use this and are in awe of the CPU usage (apart from the most certainly included bugs), here's two quick improvements that make this code almost as fast as Diva:

- don't solve for the output/y4 directly, solve for feedback * y4 instead, but keep track of y4 to update the state. This way it converges much much faster at lower resonance

- use the http://en.wikipedia.org/wiki/Secant_method (much faster, guaranteed to work)

Slightly OT, but have you (or anyone else) dabbled with saturators that are not continuously differentiable? Does the secant method still work then? According to the wiki above, it has to be twice differentiable.

Richard
Synapse Audio Software - www.synapse-audio.com
Urs
u-he

22010 posts since 7 Aug, 2002, from Berlin
mystran wrote:Actually it's not guaranteed to work. Unlike Bisection, the secant doesn't bracket the solution between the end-points, so it suffers from the same convergence problems that you can get with Newton's method. Quote from the above Wikipedia page:
Wikipedia wrote:If the initial values are not close enough to the root, then there is no guarantee that the secant method converges. There is no general definition of "close enough" [...]

Well, you can use the secant method just like the bisection method, or maybe I've understood it wrong / used the wrong term:

- like bisection, create a bracket around the zero crossing, such that you have two points, one above and one below zero
- where bisection cuts the interval in half for a new testing point, we instead take the point where a straight line between our upper and lower points crosses zero

I was under the impression that this is the secant method; maybe I added the bracketing. This method, however, will always work, as the points keep closing in on the zero.

It is sometimes slower than bisection though, which is why we have been using Brent's method. I think Diva's main zero delay feedback filter solver is a vectorized version of Brent's, with a humungous piece of code to find the initial bracket and guess.

EDIT: Oopsie, you're right… looks like I mistook "regula falsi" for the secant method

Also edited my optimisation tips to link to http://en.wikipedia.org/wiki/False_position_method
mystran
KVRAF

4893 posts since 11 Feb, 2006, from Helsinki, Finland
Urs wrote:- like bisection, create a bracket around the zero crossing, such that you have two points, one above and one below zero
- where bisection cuts the interval in half for a new testing point, we instead take the point where a straight line between our upper and lower points cross zero

To get the improved convergence for secant, you need to always pick the previous point, whether or not it keeps the solution bracketed. Unfortunately (just like with Newton) this can lead to non-convergence (you probably knew this, but just for reference). Choosing between the two previous points to force the new interval to contain the new solution leads to regula falsi, which is only linear convergence (like bisection; basically, it'll get stuck moving one of the end-points only, no matter how far the other end point might be, which keeps it from converging properly).
<- plugins | forum
Urs
u-he

22010 posts since 7 Aug, 2002, from Berlin
mystran wrote:To get the improved convergence for secant, you need to always pick the previous point, whether or not it keeps the solution bracketed. Unfortunately (just like with Newton) this can lead to non-convergence (you probably knew this, but just for reference). Choosing between the two previous points to force the new interval to contain the new solution leads to regula falsi, which is only linear convergence (like bisection; basically, it'll get stuck moving one of the end-points only, no matter how far the other end point might be, which keeps it from converging properly).

Yep, thanks, I edited my post accordingly.

(our intern will ROTFL when he reads this)
Urs
u-he

22010 posts since 7 Aug, 2002, from Berlin
Here's a revised version. This one returns y4 from the runFilter method rather than an error, and the state update was moved into its own method.

This allows us to demonstrate how one can use the same filter algorithm and integration method with a unit delay in the feedback path. So at the bottom there's a fifth state variable, "unitDelay", and a new method, "tickNotZDF", which runs the same filter with a one-sample delay in the feedback path.

Note that I forgot to say that this filter is not a proper model of an analogue prototype. The non-linearities are placed before each stage; there's no non-linear effect on the output of each stage. If one wanted to be really picky, we would have to deploy a numerical method for each of the 4 lowpass stages as well. However, this gains little in audible quality and makes the code look overly complicated.

Code: Select all
```
class filterZDF
{
public:
   ... stuff ...

   float iceq1;   // states in form
   float iceq2;   // of current
   float iceq3;   // equivalents
   float iceq4;   // of capacitors

   float runFilter( float in, float outEstimate, float &y1, float &y2, float &y3 )
   {
      float g = tan( PI * cutoff/samplerate );
      float gDiv = 1.f/(1.f + g);

      float x0 = (in - resonance * outEstimate );

      y1 = (g * tanh( x0 ) + iceq1 ) * gDiv;
      y2 = (g * tanh( y1 ) + iceq2 ) * gDiv;
      y3 = (g * tanh( y2 ) + iceq3 ) * gDiv;
      float y4 = (g * tanh( y3 ) + iceq4 ) * gDiv;

      return y4;
   }

   void updateState( float y1, float y2, float y3, float y4 )
   {
      // trapezoidal integration

      iceq1 = 2*y1 - iceq1;
      iceq2 = 2*y2 - iceq2;
      iceq3 = 2*y3 - iceq3;
      iceq4 = 2*y4 - iceq4;
   }

   float tickZDF_Bisect( float in )
   {
      float y1, y2, y3, y4;

      // calculate lower bracket
      float EstimateLow = -2.f;
      y4 = runFilter( in, EstimateLow, y1, y2, y3 );
      float errorLow = EstimateLow - y4;

      // calculate upper bracket
      float EstimateHigh = 2.f;
      y4 = runFilter( in, EstimateHigh, y1, y2, y3 );
      float errorHigh = EstimateHigh - y4;

      // iteratively bisect Estimates until error goes towards zero
      float outEstimate = EstimateHigh;

      while( fabs( outEstimate - y4 ) > 0.0001f )
      {
         outEstimate = ( EstimateHigh + EstimateLow ) * 0.5f; // midpoint of the bracket

         y4 = runFilter( in, outEstimate, y1, y2, y3 );

         float newError = outEstimate - y4;

         if( sign( newError ) == sign( errorLow ) )
         {
            EstimateLow = outEstimate;
            errorLow = newError;
         }
         else
         {
            EstimateHigh = outEstimate;
            errorHigh = newError;
         }
      }

      updateState( y1, y2, y3, outEstimate );

      return outEstimate;
   }

   float unitDelay; // additional state for feedback delay

   float tickNotZDF( float in )
   {
      float y1, y2, y3, y4;

      y4 = runFilter( in, unitDelay, y1, y2, y3 );

      updateState( y1, y2, y3, y4 );

      unitDelay = y4;

      return y4;
   }
};
```

I still haven't tested whether this code runs at all. We're currently thinking about writing a larger tutorial around this approach, demonstrating linear solvers, Mystran's method and various iterative methods for cascaded lowpasses, state variable and Sallen-Key filters, spiced up with performance and error comparisons.