Cheap non-linear zero-delay filters

DSP, Plugin and Host development discussion.

Post

Ok, so I didn't want to pollute that other thread, so I'm starting a new one.

In that other thread I mentioned you can linearize a filter around its known state and get reasonable results for not much overhead. Here's some discussion of the topic. [edit: also, while I said "zero-delay" in the thread title, I'm really doing "topology-preservation" too; you don't need to, if you don't want to, but it kinda helps if you want to mimic analog non-linearities]

I'm not going to concern myself with the BLT other than to note that, since this is based on trapezoidal integration, and since I'm using TDF2 BLT integrators as building blocks, we can tune filters exactly in cases where the non-linearities are negligible (which is to say that I'm not going to discuss what happens when the non-linearities force the cutoff to change and we're in a seriously warped region of the BLT frequency scale). Namely, the form of integrator building block I'm using is:

Code: Select all

  y[n+1] = s[n] + f*x[n]
  s[n+1] = s[n] + 2*f*x[n]

 where
  x[n], y[n] and s[n] are input, output and state

 and the tuning that maps the normalized analog frequency 1
 to whatever frequency we want is
  f = tan(M_PI * freq / samplerate)
A few useful observations: if we think of y[n] and s[n] as continuous piece-wise linear functions y(n) and s(n), then y(n)=s(n-0.5). So the output is essentially a half-sample delayed version of the state. We're going to abuse this.
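
For concreteness, here's roughly what that building block looks like as plain C (just a sketch; the struct and function names are made up for illustration):

Code: Select all

 // TDF2 trapezoidal integrator; the state s persists between calls
 typedef struct { double s; } TrapInt;

 // f = tan(M_PI * freq / samplerate), computed by the caller
 double trapint_tick(TrapInt *ti, double f, double x)
 {
     double y = ti->s + f*x;   // y[n+1] = s[n] + f*x[n]
     ti->s    = y + f*x;       // s[n+1] = s[n] + 2*f*x[n]
     return y;
 }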

To keep things simple and the math short, I'm going to use a one-pole OTA lowpass (straight from the LM13700 datasheet) as an example and ignore all the practical details like signal levels (eg we'd really need a ton of 1/vT factors) and auxiliary circuitry. So we have the following model for the OTA (the derivation or accuracy of which is not important) and capacitor:

Code: Select all

 iOta = iCtl * tanh(vIn)
 dVcap/dt = 1/C * iOta
And with feedback (since we want a lowpass and not an integrator):

Code: Select all

 dVcap/dt = iCtl/C * tanh( vIn - vCap )
So if we discretize, we get:

Code: Select all

 y[n+1] = s[n] + f * tanh( x[n] - y[n+1] )
This looks like it needs a feedback solver, but here's the thing: let's rewrite the OTA in terms of the transconductance gM = iOut/vIn, and we get:

Code: Select all

 gM(vIn) = iCtl * ( tanh( vIn ) / vIn )
or:
 iOut = iCtl * T(vIn) * vIn
where:
 T(x) = tanh(x) / x 
[footnote: tanh(x)/x=1 in the limit x -> 0, but it's easy to adapt most tanh(x) approximations to return tanh(x)/x instead and avoid this issue]
[footnote: you can do this for practically any type of memoryless non-linearity, even when "transconductance" as such wouldn't be meaningful]
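
To illustrate the first footnote with the approximation that shows up in the ladder code further down: for a rational tanh() approximation of the form x*P(x²)/Q(x²), you get a tanh(x)/x approximation simply by dropping the leading x, which removes the 0/0 at the origin entirely (a sketch; tanhApprox/tanhOverX are just illustration names):

Code: Select all

 // continued-fraction style tanh() approximation, a = x*x
 double tanhApprox(double x)
 {
     double a = x*x;
     return x * ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
 }

 // dropping the leading x gives tanh(x)/x, finite (and equal to 1) at x = 0
 double tanhOverX(double x)
 {
     double a = x*x;
     return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
 }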

This leads to:

Code: Select all

 y[n+1] = s[n] + f * T(x[n] - y[n+1]) * (x[n] - y[n+1])
The point of this exercise is that we can now treat the non-linear transconductance and the actual input separately. So what we can do is combine the Euler method for the non-linearity with the trapezoidal method for the linear part! An alternative interpretation is that we delay the transconductance by half a sample. Recall that s[n] = y[n+0.5]. For consistency, use x[n-0.5]=0.5*(x[n]+x[n-1]) for the actual input signal; everything else is available from one of the filter states:

Code: Select all

 y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
Now the feedback dependence is linear, so we can implement this as:

Code: Select all

 t = T(0.5*(x[n] + x[n-1]) - s[n])
 y[n+1] = (s[n] + f*t*x[n]) / (1 + f*t)
 s[n+1] = s[n] + 2*f*t*(x[n] - y[n+1])
Note that technically this is only reasonable when the signal changes slowly. That's true at audible frequencies if we oversample to avoid aliasing. With higher gain or nastier non-linearities the approximation deviates more, but aliasing increases too, and more oversampling improves both again. In practice the above works remarkably well for almost anything I've thrown at it so far (transistor ladders, diode ladders, OTA cascades... you name it).
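
Put together, the whole non-linear one-pole is just a handful of lines. Here's a minimal C sketch of one tick of it (the function name is made up for illustration; tanhXdX() is any tanh(x)/x approximation, like the one in the ladder code in the next post):

Code: Select all

 // one tick of the non-linear one-pole lowpass from the recurrence above
 // s = integrator state, zi = previous input (both persist between calls),
 // f = tan(M_PI * freq / samplerate)
 double otaLp1Tick(double *s, double *zi, double f, double x)
 {
     double xh = 0.5*(x + *zi); *zi = x;   // x[n-0.5]
     double ft = f * tanhXdX(xh - *s);     // f * T(x[n-0.5] - s[n])
     double y  = (*s + ft*x) / (1 + ft);   // y[n+1]
     *s += 2*ft*(x - y);                   // s[n+1]
     return y;
 }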

On paper it's less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a linear curve from the origin to the known operating point, but unlike the tangent fitting method, this tolerates violations of the "signal changes slowly" assumption much better; we might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said about tangent fitting, which can run into crazy paradoxes). You can certainly use this directly, and in most cases with a bit of oversampling (eg 4 times or so usually works sensibly for reasonable input levels) it sounds quite fine (and when I say "quite fine" I mean "certainly a lot better than a traditional sequential fudge-factored filter").

Anyway, if you're NOT happy with the results (remember we're only first order as far as the non-linearities go), we can treat the calculated value as a prediction and apply a correction step. A sensible approach would be a variation of Heun's method: take the new state (and x[n+0.5]; you need one step of lookahead), recalculate the "transconductances", then redo the linear solver, then average the resulting state with the original prediction (and likewise for the outputs). Since the error of the "correction" step should be opposite to the error of the "prediction" step, they should mostly cancel. As far as I can tell, this is sufficient to make it a true second-order method (I don't feel like doing formal error analysis, sorry).
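
For the one-pole above, the prediction/correction could look roughly like this (again just a sketch of one reading of the method; note the one-sample lookahead xNext standing in for x[n+1]):

Code: Select all

 // Heun-style predictor-corrector for the one-pole sketch above
 double otaLp1TickPC(double *s, double *zi, double f, double x, double xNext)
 {
     // prediction: exactly the plain step
     double xh = 0.5*(x + *zi); *zi = x;
     double fp = f * tanhXdX(xh - *s);
     double yp = (*s + fp*x) / (1 + fp);
     double sp = *s + 2*fp*(x - yp);

     // correction: re-evaluate T() at the predicted state and x[n+0.5],
     // redo the linear solve from the original state, then average
     double fc = f * tanhXdX(0.5*(x + xNext) - sp);
     double yc = (*s + fc*x) / (1 + fc);
     double sc = *s + 2*fc*(x - yc);

     *s = 0.5*(sp + sc);
     return 0.5*(yp + yc);
 }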

In practice the correction step roughly doubles the CPU use. In cases where the prediction works well (and most of the time it does), it's probably a better idea to double the oversampling instead, but if you hit an obviously degenerate case, then the above could solve the issue.
Last edited by mystran on Tue Jan 08, 2013 12:10 pm, edited 4 times in total.

Post

For the TL;DR folks, here's a transistor ladder using the above (without correction) to play with.. note that this assumes a straight, linear feedback loop, which is not really a very good model in practice ;)

Code: Select all

// LICENSE TERMS: Copyright 2012 Teemu Voipio
// 
// You can use this however you like for pretty much any purpose,
// as long as you don't claim you wrote it. There is no warranty.
//
// Distribution of substantial portions of this code in source form
// must include this copyright notice and list of conditions.
//

// input delay and state for member variables
double zi;
double s[4] = { 0, 0, 0, 0 };

// tanh(x)/x approximation, flatline at very high inputs
// so might not be safe for very large feedback gains
// [limit is 1/15 so very large means ~15 or +23dB]
double tanhXdX(double x)
{
    double a = x*x;
    // IIRC I got this as Pade-approx for tanh(sqrt(x))/sqrt(x) 
    return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
}

// cutoff as normalized frequency (eg 0.5 = Nyquist)
// resonance from 0 to 1, self-oscillates at settings over 0.9
void transistorLadder(
    double cutoff, double resonance,
    double * in, double * out, unsigned nsamples)
{
    // tuning and feedback
    double f = tan(M_PI * cutoff);
    double r = (40.0/9.0) * resonance;

    for(unsigned n = 0; n < nsamples; ++n)
    {
        // input with half delay, for non-linearities
        double ih = 0.5 * (in[n] + zi); zi = in[n];

        // evaluate the non-linear gains
        double t0 = tanhXdX(ih - r * s[3]);
        double t1 = tanhXdX(s[0]);
        double t2 = tanhXdX(s[1]);
        double t3 = tanhXdX(s[2]);
        double t4 = tanhXdX(s[3]);

        // g# the denominators for solutions of individual stages
        double g0 = 1 / (1 + f*t1), g1 = 1 / (1 + f*t2);
        double g2 = 1 / (1 + f*t3), g3 = 1 / (1 + f*t4);
        
        // f# are just factored out of the feedback solution
        double f3 = f*t3*g3, f2 = f*t2*g2*f3, f1 = f*t1*g1*f2, f0 = f*t0*g0*f1;

        // solve feedback 
        double y3 = (g3*s[3] + f3*g2*s[2] + f2*g1*s[1] + f1*g0*s[0] + f0*in[n]) / (1 + r*f0);

        // then solve the remaining outputs (with the non-linear gains here)
        double xx = t0*(in[n] - r*y3);
        double y0 = t1*g0*(s[0] + f*xx);
        double y1 = t2*g1*(s[1] + f*y0);
        double y2 = t3*g2*(s[2] + f*y1);

        // update state
        s[0] += 2*f * (xx - y0);
        s[1] += 2*f * (y0 - y1);
        s[2] += 2*f * (y1 - y2);
        s[3] += 2*f * (y2 - t4*y3);

        out[n] = y3;
    }
}
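
A trivial (hypothetical) way to drive it, just to show the parameter conventions:

Code: Select all

 // 1 kHz cutoff at 44.1 kHz sample rate, moderate resonance
 double in[64], out[64];
 // ... fill in[] with input samples ...
 transistorLadder(1000.0 / 44100.0, 0.5, in, out, 64);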
Last edited by mystran on Sat Dec 13, 2014 11:34 am, edited 2 times in total.

Post

Oh and AQ: if you use this, you owe me a Poly-Ana. ;)

Post

Some additional notes:
  • if you want to filter stereo signals, you can treat the signals and states as vectors, and evaluate the "transconductances" with the Pythagorean norm sqrt(L*L+R*R), which of course is specifically why I was approximating tanh(sqrt(x))/sqrt(x) in the first place (see the sketch after this list)... this isn't really circuit modelling anymore (good luck with the analog version), but unlike a dual-mono implementation, distortion tends to localize at the input signals rather than the individual speakers.. I personally like the effect and I've used it for traditional filters before I even touched any zero-delay stuff (Sweep uses it, for example; my dev builds for the new version already use a better "OTA SVF" model with the same approach to stereo)
  • if you still feel you need to iterate (which I personally think is quite futile if you apply the correction step, since trapezoidal is only second-order anyway), then my educated guess is that iterating on the transconductances is likely to converge faster than iterating on the signal itself... but like I said, I don't see the point without a better integrator
  • note that without the correction step, the cost here is almost exactly the cost of solving a linear zero-delay filter plus the cost of evaluating the non-linearities once.
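
To sketch the stereo idea from the first bullet (a variant of tanhXdX() that takes the squared argument directly, so no square root is ever needed; the per-stage variable names are hypothetical):

Code: Select all

 // same Pade as tanhXdX(), but taking the *squared* argument:
 // returns tanh(sqrt(a))/sqrt(a), so we can pass a = L*L + R*R
 double tanhXdX2(double a)
 {
     return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
 }

 // per stage: one shared gain from the stereo vector, used for both channels
 double vL = inL - sL, vR = inR - sR;   // hypothetical per-stage signals
 double t  = tanhXdX2(vL*vL + vR*vR);   // = tanh(|v|)/|v|
 // ... then use this single t in the linear solves for L and R alike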

Post

I tried the filter in my polysynth. Hot damn. I think it's the best sounding thing in a VST so far.

Here's a build: http://dl.dropbox.com/u/13976481/MauSynth_BETA_RC1.dll

Post

Thanks for all the detailed explanations! The way you make it sound so easy, though, is kinda scary :o

My one brain-cell is learning to do simple SSE and the other one is taking a nap :ud:

Your 4L2 plug is pretty dang useful as well.

Regards
Andrew

Post

Wow this filter tunes spot on. I was able to use full resonance while tracking the fundamental frequency with the MauSynth and it stayed sharp from the bass to the treble. Nice work guys :)

Post

mystran wrote:On paper it's less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a linear curve from the origin to the known operating point, but unlike the tangent fitting method, this tolerates violations of the "signal changes slowly" assumption much better; we might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said about tangent fitting, which can run into crazy paradoxes).
Nice! (well, I didn't verify or try this myself, but from what you're saying it seems really nice).

Regards,
{Z}

Post

Since I've been asked about terms of use, I added a license to the example code above. It's not terribly restrictive.

edit: also added a remark in the original post about "topology-preserving" since that's what it is.. when I wrote the thing originally, I didn't realize anyone would want to do "zero-delay" without doing topology-preserving, but apparently that's also possible :P

Post

Thanks ! There are a few x[x] which should be x[n].

Also, something doesn't make sense to me: if

Code: Select all

x[n-0.5]=0.5*(x[n]+x[n-1])
and

Code: Select all

y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
then shouldn't you have

Code: Select all

 t = T(0.5*(x[n] + x[n-1]) - s[n] )
instead of

Code: Select all

t = T(s[n] + 0.5*(x[n] + x[n-1]))

Post

Big Tick wrote:Thanks ! There are a few x[x] which should be x[n].

Also, something doesn't make sense to me: if

Code: Select all

x[n-0.5]=0.5*(x[n]+x[n-1])
and

Code: Select all

y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
then shouldn't you have

Code: Select all

 t = T(0.5*(x[n] + x[n-1]) - s[n] )
instead of

Code: Select all

t = T(s[n] + 0.5*(x[n] + x[n-1]))
Yes the sign of s[n] was obviously wrong. Corrected in OP, thanks for noticing.

That's what you get for proof-reading your own text. :P

Regarding x[x], I found one and corrected it.

Post

Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?

Post

Big Tick wrote:Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?
Good Q -> sounds sneaky, but wouldn't that be dependent on your oversampling technique?

Random add-on: Since there is the word "cheap" in the heading, you could use 1 of these 2 approximations for tan(x):
//Pade tan(x) nr1
A = -15*x+x^3;
B = 3*(-5+2*x^2);
tan_out = A/B;

//Pade tan(x) nr2
A = 5*(21*x - 2*x^3);
B = 105-45*x^2+x^4;
tan_out = A/B;
I generally use nr2 - works well - even without oversampling. Haven't checked it out with mystran's filter above though. nr1 might track equally well if you oversample.

Just my 2cents.

Regards
Andrew

Post

Big Tick wrote:Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?
You mean like 2x oversampling the input, instead of taking the average?

I've not tried it but it sounds rather useless. In the worst case, the high-frequency attenuation from averaging might reduce aliasing slightly. In the best case (when you're oversampling enough that aliasing isn't an issue), the signal is smooth enough that you could forget the delay and just use the input sample (I've tried both ways and the audible difference is generally speaking none).

Mostly I do it for theoretical consistency (ie looks good on paper). It's certainly not a critical component of the method, so I wouldn't waste too many cycles on it (you can always find something else to improve instead).

Oh and remember that aliasing in a non-linear recursive filter isn't really just an audible problem: once you start getting significant aliasing, your filter state essentially becomes bogus, and you enter a garbage-in-garbage-out feedback loop. In my book it doesn't really matter how accurately we recycle garbage (not in DSP anyway).

Post

Ichad.c wrote: Random add-on: Since there is the word "cheap" in the heading, you could use 1 of these 2 approximations for tan(x):
//Pade tan(x) nr1
A = -15*x+x^3;
B = 3*(-5+2*x^2);
tan_out = A/B;

//Pade tan(x) nr2
A = 5*(21*x - 2*x^3);
B = 105-45*x^2+x^4;
tan_out = A/B;
I used tan(x) and moved it out of the per-sample loop in the example just for simplicity. That's not really what I use in practice; in fact MSVC CRT tan() doesn't even work right when I use truncation as rounding mode.

On top of that, you probably have an exponentiation inside the tangent, so what you really want is tan(pow(2,octaves+log2_basetune)) or something sufficiently similar. If you run this once per sample (and I certainly do), you almost certainly want to optimize the whole thing in one way or another, but that's kind of a different topic. :)

edit: I like your approach though.. Pade approximations are generally what I always try first whenever I need to approximate something nasty. ;)

edit2: I can't resist commenting on the "different topic" just enough to mention that in general the accuracy requirements for tuning should be optimized in terms of octaves (ie on the logarithmic scale), because the perception of pitch is such that a 10 cent deviation at 100Hz sounds roughly as bad as a 10 cent deviation at 2kHz, even though the absolute errors are quite different. ;)
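
To make that slightly more concrete, a per-sample tuning path might look something like this sketch, combining exp2() for the pitch with the [3/4] Pade tan() quoted above (the names and the exact parameterization are just assumptions for illustration):

Code: Select all

 // hypothetical per-sample tuning: pitch in octaves relative to baseFreq,
 // valid while M_PI*freq/samplerate stays well below pi/2
 double tuneCoeff(double octaves, double baseFreq, double samplerate)
 {
     double w  = M_PI * baseFreq * exp2(octaves) / samplerate;
     double w2 = w*w;
     return w*(105 - 10*w2) / (105 - 45*w2 + w2*w2);  // ~ tan(w), same as "nr2" above
 }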
