KVR Forum » DSP and Plug-in Development
christripledot
KVRer
 Posted: Tue Apr 10, 2012 12:10 pm
Apologies in advance for being dense. Digital filter theory does bad things to my brain, however many papers I read. Some fantastic threads on this forum have given me a kick-start into writing a zero-delay integrator-based LPF, but I'm struggling with a few concepts. I realise this is old news now, but please show mercy!

I think I'd better introduce my confusions one by one, so here's the first:

A basic integrator LPF is given by:

iin = in - buf;
buf = buf + f * iin;
// buf contains the output for this sample

Or, put another way,

iin = in - buf;
out = buf + f * iin;
buf = out;

And (according to neotec's brilliantly informative post on the subject) a bilinear integrator LPF is given by:

iin = in - buf;
out = buf + f * iin;
buf = f * iin + out;

In both cases, 'out' = buf + f * iin. But while the basic integrator simply stores the output in 'buf', ready for the next sample, the bilinear integrator looks like it performs another filtering step. What's going on here, in layman's terms? Is the 'out' in line 3 different to the 'out' in line 2?

^ Joined: 09 Oct 2011  Member: #266362
Urs
KVRAF
 Posted: Tue Apr 10, 2012 12:49 pm
Hmmm... in contrast to my earlier beliefs, I'd say that a basic LPF doesn't have feedback. It has a state.

What we're usually talking about is a more complex filter structure with at least 2 poles/integrators that takes the output (or another signal source) and feeds it back into the input (or another point).

A simple example is a 4-pole ladder filter:

output = 0
input = x - res * y4;
y1 = y1 + f * (input - y1)
y2 = y2 + f * (y1 - y2)
y3 = y3 + f * (y2 - y3)
y4 = y4 + f * (y3 - y4)
output = y4

What you can easily see is that the input is composed of the actual input sample x and the previous output sample y4. This is the lag/delay/latency.

We want the y4 at the input side to be identical to the y4 at the output side. So we

output = 0
guess = y4 // previous output sample... can be done better

while( true )
{
input = x - res * guess;
yb1 = y1 + f * (input - y1)
yb2 = y2 + f * (yb1 - y2)
yb3 = y3 + f * (yb2 - y3)
yb4 = y4 + f * (yb3 - y4)
output = y4

if ( fabs(y4 - guess) > 0.00001 ) break;

guess += (y4 - guess) * 0.1; // crap method, may work though
}

y1 = yb1
y2 = yb2
y3 = yb3
y4 = yb4

So, we calculate the filter with a guess of what y4 will be, but we do not update the state just yet. Then we compare our guess to y4. If it's close enough, then we've just calculated y4 with itself. We break out of the loop and update the state. If y4 and guess are still far apart, then we create a new (hopefully better) guess and calculate the filter again.

This method is hilariously expensive. As long as the filter is linear, you can use numerical methods to create an accurate guess without actually calculating the filter once. But as soon as non-linear elements enter the structure you can't do that anymore. You need to solve the filter iteratively, using e.g. trapezoidal integration, newton raphson or whatever other method you see fit.
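To make the "linear filters can be solved without iterating" point concrete, here is a compilable sketch (my code and names, not from the post): because the filter is linear, the output yb4 is an affine function of the feedback guess g, yb4(g) = C - res*f^4*g, so the fixed point is available in closed form instead of by looping. For clarity it still runs the stages twice per sample.

```c
#include <math.h>

/* Illustrative sketch only: a linear 4-pole ladder whose implicit
   feedback is solved exactly rather than iterated. */
typedef struct { double y1, y2, y3, y4; } Ladder;

static double run_stages(const Ladder *s, double x, double f, double res,
                         double g, Ladder *next)
{
    double input = x - res * g;                   /* feedback uses the guess g */
    next->y1 = s->y1 + f * (input    - s->y1);
    next->y2 = s->y2 + f * (next->y1 - s->y2);
    next->y3 = s->y3 + f * (next->y2 - s->y3);
    next->y4 = s->y4 + f * (next->y3 - s->y4);
    return next->y4;
}

static double ladder_tick_exact(Ladder *s, double x, double f, double res)
{
    Ladder tmp;
    /* One pass with g = 0 yields the constant term C of yb4(g). */
    double C  = run_stages(s, x, f, res, 0.0, &tmp);
    /* Solve g = C - res*f^4*g exactly. */
    double g  = C / (1.0 + res * f * f * f * f);
    /* Final pass with the exact g, then commit the state. */
    double y4 = run_stages(s, x, f, res, g, &tmp);
    *s = tmp;
    return y4;
}
```

At DC each stage has unity gain, so with constant input x the output settles to x / (1 + res), which makes a handy sanity check.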

Just imagine anything as simple as

yb1 = tanh( y1 + f * (input - y1) )
yb2 = tanh( y2 + f * (yb1 - y2) )
yb3 = tanh( y3 + f * (yb2 - y3) )
yb4 = tanh( y4 + f * (yb3 - y4) )

Even a Newton-Raphson of this (it involves building the derivative of that whole thing) is really, really difficult and probably not fast enough to run in realtime. And then, with that amount of nonlinearity you still want to oversample 4x or so...

This is where the fun starts.

I hope this helps a bit.

Urs
^ Joined: 07 Aug 2002  Member: #3542  Location: Berlin
Urs
KVRAF
 Posted: Tue Apr 10, 2012 12:50 pm
(sorry, wife demands attention, I couldn't be bothered to write proper code in proper code tags)
^ Joined: 07 Aug 2002  Member: #3542  Location: Berlin
christripledot
KVRer
 Posted: Tue Apr 10, 2012 1:23 pm

I assume at the end of the output prediction loop you meant 'output = yb4' etc...

Yes, I understand the principle of estimating the output before performing the filtering. I've knocked up a working ZDF 4-pole already, and I'd like to post the code to check whether I'm performing any unnecessary steps - but before I get there I wanted to clear up the difference between the 'regular' and the bilinear integrators. Sticking with one pole for now, and before we even consider feedback, I wondered just what is going on with the different calculation for 'buf' in the bilinear case. Why is the calculation for 'out' the same for both kinds of integrator, while the calculation for 'buf' is different? Or am I just missing something obvious?

The next thing I was going to ask about has to do with iteration (or lack of it). Using the bilinear integrator, the output prediction equation can be solved to give:

out = (buf + f * input) / (1 + f)

Looking at the maths for this, it seems to be a solid solution. Does using a bilinear integrator allow us to accurately predict the output without the need to perform repeated iterations?
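As a sanity check, that solved equation drops straight into a one-pole sketch (my code and names, assuming the bilinear state update quoted earlier); the division by (1 + f) is what removes the need for iteration in the linear case.

```c
#include <math.h>

/* One-pole zero-delay lowpass using the solved output equation.
   out satisfies out = buf + f * (in - out) by construction, so the
   implicit feedback is resolved without any iteration. */
static double zdf_lp1(double *buf, double in, double f)
{
    double out = (*buf + f * in) / (1.0 + f);
    double iin = in - out;            /* implicit integrator input */
    *buf = f * iin + out;             /* bilinear state update     */
    return out;
}
```

With f = 1 and a unit step the output reaches 0.5 on the first sample and 1.0 on the second, which is easy to verify by hand.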
^ Joined: 09 Oct 2011  Member: #266362
Urs
KVRAF
 Posted: Tue Apr 10, 2012 2:32 pm
I never quite got that part either, or it must have fallen out of my head recently. I assume it has advantages for analytical methods. Might be able to clear that up shortly.

Nevertheless, iterative methods can't be avoided with a bunch of nonlinear elements. They can only be minimised by clever guesses.
^ Joined: 07 Aug 2002  Member: #3542  Location: Berlin
pj geerlings
KVRian
 Posted: Tue Apr 10, 2012 5:05 pm
I have been meaning to bring this whole thing up again myself - so I'll join this discussion by offering the following (correct I hope) version of Urs' explanation ...
guess = y4 // previous output sample... can be done better

while( true ) {
input = x - res * guess;

yb1 = y1 + f * (input - y1)
yb2 = y2 + f * (yb1 - y2)
yb3 = y3 + f * (yb2 - y3)
yb4 = y4 + f * (yb3 - y4)
output = yb4

if ( fabs(yb4 - guess) < 0.00001 )
break;

guess += (yb4 - guess) * 0.1; // crap method, may work though
}

y1 = yb1
y2 = yb2
y3 = yb3
y4 = yb4

my 1st question ...
How is this:
out = (buf + f * input) / (1 + f)
integrated into the code above?

peace y'all
pj
^ Joined: 30 Nov 2003  Member: #10706  Location: Newport Beach CA USA
aciddose
KVRAF
 Posted: Tue Apr 10, 2012 5:22 pm
there is no difference:

memory = memory + input * coefficient - memory * coefficient
or
memory = memory + (input - memory) * coefficient

http://en.wikipedia.org/wiki/Multiplication#Properties

it actually isn't really correct to think of the second version as fewer operations. you're just combining the operations - there are still multiple things going on in the function.

all you need to do is understand the basic properties of arithmetic and you can break down many such functions to manipulate them in more complex ways.
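The distributive identity behind this can be checked numerically with a throwaway sketch (the sample values below are arbitrary, my choice):

```c
#include <math.h>

/* The two one-pole update forms from above differ only by the
   distributive law, so they agree to within floating-point rounding. */
static double expanded(double memory, double input, double coefficient)
{
    return memory + input * coefficient - memory * coefficient;
}

static double factored(double memory, double input, double coefficient)
{
    return memory + (input - memory) * coefficient;
}
```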
^ Joined: 07 Dec 2004  Member: #50793
christripledot
KVRer
 Posted: Tue Apr 10, 2012 5:36 pm
This I know and understand. My apologies if I'm being dense, but I still think there's something screwy going on:

1. For the plain vanilla integrator:

out = buf + f * iin;
buf = out;

2. For the bilinear integrator:

out = buf + f * iin;
buf = f * iin + out;

The way I understood it, these two lines aren't a pair of simultaneous equations, they're algorithmic steps in pseudocode. Let's expand the 'out' in line 2 with its definition in line 1:

buf = f*iin + buf + f*iin

How on earth is '2*(f*iin) + buf' equivalent to 'f*iin + buf'?!?
^ Joined: 09 Oct 2011  Member: #266362
aciddose
KVRAF
 Posted: Tue Apr 10, 2012 6:02 pm
if you don't make up weird variable names things become a lot easier to understand.

2*coefficient*delta + memory obviously just doubles the frequency. this isn't the right function.

i think what you mean is this:

temp = input + buffer * coefficient
output = buffer - temp * coefficient
buffer = temp

vs.

output = buffer - input * coefficient
buffer = output * coefficient + input

the reciprocal of the coefficient is used in the delay-less version.

(what i don't understand is why people have just 'discovered' these well-known functions recently...)
^ Joined: 07 Dec 2004  Member: #50793
christripledot
KVRer
 Posted: Tue Apr 10, 2012 6:09 pm
All of the pseudocode I've posted has been quoted from neotec's exhaustive posts on the subject in an earlier thread (including the variable names). I thought something was up with that particular definition of the bilinear integrator; it's nice to have it confirmed.

Perhaps I've misunderstood the context in the first place. All I really wanted to know is: is there any difference in using 'bilinear' integrators (whatever they are) to build a low-pass, as opposed to the usual integrator definition?

I'm guessing not; perhaps it just has to do with having an equation that can be rearranged to more easily solve the feedback when building a resonant filter?

EDIT:

Thanks, just saw your edit. Yes, either I missed a 'minus' sign or there was a typo in neotec's code. Now I see quite clearly that they are equivalent.

Re: sudden discovery of these functions:
Well, it's only in recent days that I've tried my hand at writing a resonant filter, and this forum is one of the few places with decent explanations that don't involve heavyweight mathematical notation. I'm no idiot when it comes to maths, but I'm not familiar with the notation, so I really appreciate there being so many helpful folk here willing to discuss it in plain English.
^ Joined: 09 Oct 2011  Member: #266362
Caco
KVRian
 Posted: Tue Apr 10, 2012 11:04 pm
Urs wrote:

Even a Newton-Raphson of this (involves building the derivate of that whole thing) is really, really difficult and probably not fast enough to run in realtime. And then, with that amount of nonlinearity you still want to oversample 4x or so...

This is where the fun starts.

I hope this helps a bit.

Urs

Another fun property of all this is that the more you oversample, the fewer iterations you need, so oversampling ends up being really cheap.

Generally, I haven't found a huge improvement in the sound from moving to delayless filters in my code. It certainly helps, but I only really notice major improvements in specific situations, such as when modulating the filter quickly - so it doesn't suddenly turn an average filter into some super analog-sounding emulation. A rubbish filter made delayless still sounds rubbish; a good filter made delayless can sound better.
^ Joined: 25 Apr 2005  Member: #66287
Urs
KVRAF
 Posted: Wed Apr 11, 2012 1:06 am
Caco wrote:
Another fun property of all this is that the more you oversample, the fewer iterations you need, so oversampling ends up being really cheap.

Unless your prediction comes up with a correct initial guess most of the time
Quote:
Generally, I haven't found a huge improvement in the sound from moving to delayless filters in my code. It certainly helps, but I only really notice major improvements in specific situations, such as when modulating the filter quickly - so it doesn't suddenly turn an average filter into some super analog-sounding emulation. A rubbish filter made delayless still sounds rubbish; a good filter made delayless can sound better.

Very, very true!
^ Joined: 07 Aug 2002  Member: #3542  Location: Berlin
cheppner
KVRist
 Posted: Wed Apr 11, 2012 3:29 am
Well, let me share some additional insights.

What an integrator does is, well, it integrates. Often we simply assume that in order to get the integral we just have to sum up our values - that's the rectangle rule. If we instead integrate to the midpoint between samples (trapezoids instead of rectangles), we get the trapezoidal rule, which gives a 3rd-order local error.

In fact, a linear (Euler) integrator looks like this:
out = buf + f * in
buf = out

And, afaik, a bilinear (trapezoidal) integrator looks like this:
tmp = f/2 * in + buf
out = tmp + buf
buf = tmp

Now let's look at our integrators for several steps (assuming f = 1 and buffer=0 for a start):
the linear one gives f*in1, f*(in1+in2), f*(in1+in2+in3), …
the bilinear one gives f/2*in1, f*in1+f/2*in2, f*(in1+in2)+f/2*in3, …

In other words: in each step it integrates only HALF of the newest input sample - effectively it integrates up to the middle of the sample.
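Those two running sequences can be reproduced with a throwaway check (f = 1 and the inputs 1, 2, 3 are my choice, not from the post):

```c
#include <math.h>

/* Compare the plain (Euler) and bilinear (trapezoidal) integrators
   on inputs {1, 2, 3} with f = 1, reproducing the sequences above:
   Euler:    1.0, 3.0, 6.0
   bilinear: 0.5, 2.0, 4.5  (lags half an input sample behind)     */
static void integrate_both(const double *in, int n,
                           double f, double *euler, double *bilinear)
{
    double buf_e = 0.0, buf_b = 0.0;
    for (int i = 0; i < n; ++i) {
        /* Euler: out = buf + f*in; buf = out */
        buf_e += f * in[i];
        euler[i] = buf_e;
        /* bilinear: tmp = f/2*in + buf; out = tmp + buf; buf = tmp */
        double tmp = f * 0.5 * in[i] + buf_b;
        bilinear[i] = tmp + buf_b;
        buf_b = tmp;
    }
}
```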

Then there is a different view. What we are ultimately doing is transferring analog circuit equations to the digital domain (s- to z-plane). The question often is: how do we translate a capacitor/integrator? One answer is: with a linear integrator, or with a bilinear one, where the bilinear one is equivalent to using the bilinear transform.
The latter matches the amplitude and phase response of the analog filter more closely, giving you an advantage in many cases.

I hope this clears up where the difference lies (in time and frequency-view).

Cheers,
- Clemens

BTW: I recommend this super-ancient lecture series
^ Joined: 20 Aug 2010  Member: #237880
Caco
KVRian
 Posted: Wed Apr 11, 2012 3:43 am
Urs wrote:
Caco wrote:
Another fun property of all this is that the more you oversample, the fewer iterations you need, so oversampling ends up being really cheap.

Unless your prediction comes up with a correct initial guess most of the time

Cool, at the moment I average around four iterations at most filter settings, but the iteration count creeps up quite a bit at extreme resonance and cutoff settings.
^ Joined: 25 Apr 2005  Member: #66287
Urs
KVRAF
 Posted: Wed Apr 11, 2012 3:57 am
Caco wrote:
Cool, at the moment I average around four iterations at most filter settings, but the iteration count creeps up quite a bit at extreme resonance and cutoff settings.

Yes, high resonance becomes a CPU killer.

But take an example: in a synthesizer you actually know what's going into the filter (oscillator waveforms, white noise, etc.). And once the filter goes into self-oscillation you also know what's happening. So you can take these cases into account when preparing your initial guess.

Urs
^ Joined: 07 Aug 2002  Member: #3542  Location: Berlin