KVR Forum » DSP and Plug-in Development
Cheap non-linear zero-delay filters
mystran
KVRAF
Posted: Thu May 17, 2012 10:14 am
Ok, so I didn't want to pollute that other thread, so I'm starting a new one.

In that other thread I mentioned that you can linearize a filter around its known state and get reasonable results for not much overhead. Here's some discussion of the topic. [edit: also, while I said "zero-delay" in the thread title, I'm really doing "topology preservation" too; you don't need to, if you don't want to, but it helps if you want to mimic analog non-linearities]

I'm not going to concern myself with the BLT other than to note that since this is based on trapezoidal integration, and since I'm using TDF2 BLT integrators as building blocks, we can tune filters exactly in cases where the non-linearities are negligible (which is to say that I'm not going to discuss what happens when the non-linearities force the cutoff to change and we end up in a seriously warped region of the BLT frequency scale). Namely, the integrator building block I'm using is:

y[n+1] = s[n] + f*x[n]
s[n+1] = s[n] + 2*f*x[n]

where
x[n], y[n] and s[n] are input, output and state

and the tuning that maps the normalized analog frequency 1
to whatever frequency we want is
f = tan(M_PI * freq / samplerate)

A few useful observations: if we think of y[n] and s[n] as continuous piecewise-linear functions y(n) and s(n), then y(n) = s(n-0.5). So the output is essentially a half-sample delayed version of the state. We're going to abuse this.
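To make the building block concrete, here's a minimal sketch (struct and function names are mine, not from the post):

```cpp
#include <cmath>

// Minimal sketch of the TDF2 trapezoidal integrator described above.
// Struct and function names are illustrative, not from the post.
struct Integrator
{
    double s = 0; // state

    double tick(double x, double f)
    {
        double y = s + f * x; // y[n+1] = s[n] + f*x[n]
        s += 2 * f * x;       // s[n+1] = s[n] + 2*f*x[n]
        return y;
    }
};

// Tuning: maps the normalized analog frequency 1 to the desired cutoff.
double tuning(double freq, double samplerate)
{
    return std::tan(M_PI * freq / samplerate);
}
```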

To keep things simple and the math short, I'm going to use a one-pole OTA lowpass (straight from the LM13700 datasheet) as an example, and ignore all the practical details like signal levels (eg we'd really need a ton of 1/vT factors) and auxiliary circuitry. So we have the following model for the OTA (the derivation or accuracy of which is not important) and capacitor:
iOta = iCtl * tanh(vIn)
dVcap/dt = 1/C * iOta

And with feedback (since we want lowpass and not an integrator):
dVcap/dt = iCtl/C * tanh( vIn - vCap )

So if we discretize, we get:
y[n+1] = s[n] + f * tanh( x[n] - y[n+1] )

This looks like it needs a feedback solver, but here's the thing: let's rewrite the OTA in terms of the transconductance gM = iOut/vIn, and we get:

gM(vIn) = iCtl * ( tanh( vIn ) / vIn )
or:
iOut = iCtl * T(vIn) * vIn
where:
T(x) = tanh(x) / x

[footnote: tanh(x)/x=1 in the limit x -> 0, but it's easy to adapt most tanh(x) approximations to return tanh(x)/x instead and avoid this issue]
[footnote: you can do this for practically any type of memoryless non-linearity, even when "transconductance" as such wouldn't be meaningful]

y[n+1] = s[n] + f * T(x[n] - y[n+1]) * (x[n] - y[n+1])

The point of this exercise is that we can now treat the non-linear transconductance and the actual input separately. So what we can do is combine the Euler method for the non-linearity with the trapezoidal method for the linear part! In an alternative interpretation, we delay the transconductance by half a sample. Recall that s[n] = y[n+0.5]. For consistency, use x[n-0.5] = 0.5*(x[n]+x[n-1]) for the actual input signal; everything else is available from one of the filter states:
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])

Now the feedback dependence is linear, so we can implement this as:
t = T(0.5*(x[n] + x[n-1]) - s[n])
y[n+1] = (s[n] + f*t*x[n]) / (1 + f*t)
s[n+1] = s[n] + 2*f*t*(x[n] - y[n+1])
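As a runnable sketch of the whole one-pole step (my naming; tanh(x)/x is evaluated directly with a guard near zero, where a Pade approximation would be used in practice):

```cpp
#include <cmath>

// Sketch of the complete linearized one-pole OTA lowpass step above.
// Names are illustrative; tanh(x)/x is evaluated directly with a guard
// near zero (a Pade approximation would be used in practice).
struct OnePoleOTA
{
    double s = 0;  // integrator state
    double x1 = 0; // previous input, for the half-sample delayed input

    static double T(double x)
    {
        return (std::fabs(x) < 1e-9) ? 1.0 : std::tanh(x) / x;
    }

    double tick(double x, double f)
    {
        double xh = 0.5 * (x + x1); // x[n-0.5]
        x1 = x;
        double t = T(xh - s);                     // gain at the known state
        double y = (s + f * t * x) / (1 + f * t); // linear feedback solve
        s += 2 * f * t * (x - y);                 // trapezoidal state update
        return y;
    }
};
```

Fed a constant input, this settles at unity DC gain, as a lowpass should.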

Note that technically this is only reasonable when the signal changes slowly. That's true at audible frequencies if we oversample to avoid aliasing. With higher gain or nastier non-linearities the approximation deviates more, but aliasing also increases, so once you oversample you improve both again. In practice the above works remarkably well for almost anything I've thrown at it so far (transistor ladders, diode ladders, OTA cascades... you name it).

On paper it's less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a linear curve from the origin to the known operating point; but unlike the tangent-fitting method, this tolerates violations of the "signal changes slowly" assumption much better. We might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said of tangent fitting, which can run into crazy paradoxes). You can certainly use this directly, and in most cases with a bit of oversampling (eg 4 times or so usually works sensibly for reasonable input levels) it sounds quite fine (and when I say "quite fine" I mean "certainly a lot better than a traditional sequential fudge-factored filter").

Anyway, if you're NOT happy with the results (remember we're only first order as far as the non-linearities go), we can treat the calculated value as a prediction and apply a correction step. A sensible approach would be a variation of Heun's method: take the new state (and x[n+0.5]; you need one step of lookahead), recalculate the "transconductances", redo the linear solver, then average the resulting state with the original prediction (and likewise for the outputs). Since the error of the "correction" step should be opposite to the error of the "prediction" step, they should mostly cancel. As far as I can tell, this is sufficient to make it a true second-order method (I don't feel like doing a formal error analysis, sorry).

In practice the correction step roughly doubles the CPU use. In cases where the prediction works well (and most of the time it does), it's probably a better idea to double the oversampling instead, but if you hit an obviously degenerate case, then the above could solve the issue.
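A sketch of what the prediction + correction could look like for the one-pole case (my naming; xNext is the one-sample lookahead mentioned above):

```cpp
#include <cmath>

// Sketch of the Heun-style predictor-corrector for the one-pole case.
// x0 is x[n], xNext is x[n+1] (the one-step lookahead); names are mine.
struct OnePoleHeun
{
    double s = 0;
    double xPrev = 0;

    static double T(double x)
    {
        return (std::fabs(x) < 1e-9) ? 1.0 : std::tanh(x) / x;
    }

    double tick(double x0, double xNext, double f)
    {
        // prediction: transconductance from x[n-0.5] and the known state
        double tp = T(0.5 * (x0 + xPrev) - s);
        double yp = (s + f * tp * x0) / (1 + f * tp);
        double sp = s + 2 * f * tp * (x0 - yp);

        // correction: recalculate the transconductance at x[n+0.5] and the
        // predicted state, redo the linear solve, then average the results
        double tc = T(0.5 * (x0 + xNext) - sp);
        double yc = (s + f * tc * x0) / (1 + f * tc);
        double sc = s + 2 * f * tc * (x0 - yc);

        s = 0.5 * (sp + sc);
        xPrev = x0;
        return 0.5 * (yp + yc);
    }
};
```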
----
<- my plugins | my music -> @Soundcloud

Last edited by mystran on Tue Jan 08, 2013 12:10 pm; edited 4 times in total
^ Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
mystran
KVRAF
Posted: Thu May 17, 2012 10:18 am
For the TL;DR folks, here's a transistor ladder using the above (without correction) to play with. Note that this assumes a straight, linear feedback loop, which is not really a very good model in practice.

//
// You can use this however you like for pretty much any purpose,
// as long as you don't claim you wrote it. There is no warranty.
//
// Distribution of substantial portions of this code in source form
// must include this copyright notice and list of conditions.
//

// input delay and filter state (member variables)
double z1 = 0;
double s[4] = { 0, 0, 0, 0 };

// tanh(x)/x approximation; flatlines at very high inputs,
// so it might not be safe for very large feedback gains
// [the limit is 1/15, so "very large" means ~15, or +23dB]
double tanhXdX(double x)
{
double a = x*x;
// IIRC I got this as Pade-approx for tanh(sqrt(x))/sqrt(x)
return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
}

// cutoff as normalized frequency (eg 0.5 = Nyquist)
// resonance from 0 to 1, self-oscillates at settings over 0.9
// (function name below is illustrative; the original post omits it)
void processLadder(double cutoff, double resonance,
    double * in, double * out, unsigned nsamples)
{
// tuning and feedback
double f = tan(M_PI * cutoff);
double r = (40.0/9.0) * resonance;

for(unsigned n = 0; n < nsamples; ++n)
{
// input with half delay, for non-linearities
double ih = 0.5 * (in[n] + z1); z1 = in[n];

// evaluate the non-linear gains
double t0 = tanhXdX(ih - r * s[3]);
double t1 = tanhXdX(s[0]);
double t2 = tanhXdX(s[1]);
double t3 = tanhXdX(s[2]);
double t4 = tanhXdX(s[3]);

// g# are the denominators for the solutions of the individual stages
double g0 = 1 / (1 + f*t1), g1 = 1 / (1 + f*t2);
double g2 = 1 / (1 + f*t3), g3 = 1 / (1 + f*t4);

// f# are just factored out of the feedback solution
double f3 = f*t3*g3, f2 = f*t2*g2*f3, f1 = f*t1*g1*f2, f0 = f*t0*g0*f1;

// solve feedback
double y3 = (g3*s[3] + f3*g2*s[2] + f2*g1*s[1] + f1*g0*s[0] + f0*in[n]) / (1 + r*f0);

// then solve the remaining outputs (with the non-linear gains here)
double xx = t0*(in[n] - r*y3);
double y0 = t1*g0*(s[0] + f*xx);
double y1 = t2*g1*(s[1] + f*y0);
double y2 = t3*g2*(s[2] + f*y1);

// update state
s[0] += 2*f * (xx - y0);
s[1] += 2*f * (y0 - y1);
s[2] += 2*f * (y1 - y2);
s[3] += 2*f * (y2 - t4*y3);

out[n] = y3;
}
}

Last edited by mystran on Fri May 18, 2012 1:17 am; edited 1 time in total
mystran
KVRAF
Posted: Thu May 17, 2012 10:47 am
Oh, and AQ: if you use this, you owe me a Poly-Ana.
mystran
KVRAF
Posted: Thu May 17, 2012 11:56 am

• if you want to filter stereo signals, you can treat the signals and states as vectors, and evaluate the "transconductances" with the Pythagorean norm sqrt(L*L + R*R), which of course is specifically why I was approximating tanh(sqrt(x))/sqrt(x) in the first place... this isn't really circuit modelling anymore (good luck with an analog version), but unlike a dual-mono implementation, the distortion tends to localize at the input signals rather than at the individual speakers. I personally like the effect, and I've used it for traditional filters before I even touched any zero-delay stuff (Sweep uses it, for example; my dev builds for the new version already use a better "OTA SVF" model with the same approach to stereo)

• if you still feel you need to iterate (which I personally think is quite futile if you apply the correction step, since trapezoidal is only second-order anyway), then my educated guess is that iterating on the transconductance is likely to converge faster than iterating on the signal itself... but like I said, I don't see the point without a better integrator

• note that without the correction step, the cost here is almost exactly the cost of solving a linear zero-delay filter + the cost of evaluating the non-linearities once.
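For the stereo trick in the first bullet, here's a sketch (my naming): since the Pade approximation of tanh(sqrt(a))/sqrt(a) is a function of a = x², the Pythagorean norm never needs an explicit sqrt:

```cpp
// Shared stereo "transconductance": evaluate tanh(x)/x at the Pythagorean
// norm sqrt(L*L + R*R). The Pade approximation below is a function of x^2,
// so we can feed it L*L + R*R directly and never take the sqrt.
// Function names are illustrative.
double tanhXdX2(double a) // a = x*x
{
    return ((a + 105) * a + 945) / ((15 * a + 420) * a + 945);
}

void stereoGain(double l, double r, double* gl, double* gr)
{
    double t = tanhXdX2(l * l + r * r); // one shared gain for both channels
    *gl = t * l;
    *gr = t * r;
}
```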
mr.bungle
KVRist
Posted: Thu May 17, 2012 12:36 pm
I tried the filter in my polysynth. Hot damn. I think it's the best sounding thing in a VST so far.

Here's a build: http://dl.dropbox.com/u/13976481/MauSynth_BETA_RC1.dll
^ Joined: 25 Jul 2007  Member: #156204  Location: Finland
KVRist
Posted: Thu May 17, 2012 12:53 pm
Thanks for all the detailed explanations! The way you make it sound so easy, though, is kinda scary

My one brain-cell is learning to do simple SSE and the other one is taking a nap

Your 4L2 plug is pretty dang useful as well.

Regards
Andrew
^ Joined: 08 Feb 2012  Member: #274678  Location: South - Africa
camsr
KVRAF
Posted: Thu May 17, 2012 11:45 pm
Wow, this filter tunes spot on. I was able to use full resonance while tracking the fundamental frequency with MauSynth, and it stayed sharp from bass to treble. Nice work, guys
^ Joined: 16 Feb 2005  Member: #58183
Z1202
KVRist
Posted: Fri May 18, 2012 12:11 am
mystran wrote:
On paper it's less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a linear curve from the origin to the known operating point; but unlike the tangent-fitting method, this tolerates violations of the "signal changes slowly" assumption much better. We might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said of tangent fitting, which can run into crazy paradoxes).
Nice! (well, I didn't verify or try this myself, but from what you're saying seems really nice).

Regards,
{Z}
^ Joined: 11 Apr 2002  Member: #2472
mystran
KVRAF
Posted: Fri May 18, 2012 1:18 am

edit: also added a remark in the original post about "topology preservation", since that's what this is... when I wrote the thing originally, I didn't realize anyone would want to do "zero-delay" without topology preservation, but apparently that's also possible
Big Tick
KVRAF
Posted: Fri May 18, 2012 3:50 am
Thanks! There are a few x[x] which should be x[n].

Also, something doesn't make sense to me: if x[n-0.5] = 0.5*(x[n]+x[n-1]) and
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
then shouldn't you have
t = T(0.5*(x[n] + x[n-1]) - s[n])
instead of
t = T(s[n] + 0.5*(x[n] + x[n-1]))
^ Joined: 28 May 2001  Member: #586  Location: New York, NY
mystran
KVRAF
Posted: Fri May 18, 2012 3:56 am
Big Tick wrote:
Thanks! There are a few x[x] which should be x[n].

Also, something doesn't make sense to me: if x[n-0.5] = 0.5*(x[n]+x[n-1]) and
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
then shouldn't you have
t = T(0.5*(x[n] + x[n-1]) - s[n])
instead of
t = T(s[n] + 0.5*(x[n] + x[n-1]))

Yes, the sign of s[n] was obviously wrong. Corrected in the OP, thanks for noticing.

Regarding x[x], I found one and corrected it.
Big Tick
KVRAF
Posted: Fri May 18, 2012 4:04 am
Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?
KVRist
Posted: Fri May 18, 2012 5:58 am
Big Tick wrote:
Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?

Good Q -> sounds sneaky, but wouldn't that be dependent on your oversampling technique?

Random add-on: since there is the word "cheap" in the heading, you could use one of these two approximations for tan(x):

Quote:
A = -15*x+x^3;
B = 3*(-5+2*x^2);
tan_out = A/B;

A = 5*(21*x - 2*x^3);
B = 105 - 45*x^2 + x^4;
tan_out = A/B;

I generally use nr. 2 - it works well, even without oversampling. I haven't checked it with mystran's filter above, though. Nr. 1 might track equally well if you oversample.
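As a quick sanity check of approximations like these, here's the higher-order tangent approximation written (my code) so that tan(x) ≈ x for small x, which I believe is the [3/4] truncation of the tan continued fraction:

```cpp
#include <cmath>

// tan(x) approximation, written so that tan(x) ~ x for small x:
// tan(x) ~ x*(105 - 10*x^2) / (105 - 45*x^2 + x^4).
// Accurate well below x = pi/2; the function name is illustrative.
double tanApprox(double x)
{
    double a = x * x;
    return x * (105 - 10 * a) / (105 + a * (a - 45));
}
```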

Just my 2cents.

Regards
Andrew
mystran
KVRAF
Posted: Fri May 18, 2012 6:01 am
Big Tick wrote:
Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?

You mean 2x oversample the input, instead of taking the average?

I've not tried it, but it sounds rather useless. In the worst case, the high-frequency attenuation from averaging might reduce aliasing slightly. In the best case (when you're oversampling enough that aliasing isn't an issue) the signal is smooth enough that you could forget the delay and just use the input sample (I've tried both ways and, generally speaking, there is no audible difference).

Mostly I do it for theoretical consistency (ie looks good on paper). It's certainly not a critical component of the method, so I wouldn't waste too many cycles on it (you can always find something else to improve instead).

Oh, and remember that aliasing in a non-linear recursive filter isn't just an audibility problem: once you start getting significant aliasing, your filter state essentially becomes bogus, and you enter a garbage-in, garbage-out feedback loop. In my book it doesn't really matter how accurately we recycle garbage (not in DSP, anyway).
mystran
KVRAF
Posted: Fri May 18, 2012 6:20 am

Random add-on: since there is the word "cheap" in the heading, you could use one of these two approximations for tan(x):

Quote:
A = -15*x+x^3;
B = 3*(-5+2*x^2);
tan_out = A/B;

A = 5*(21*x - 2*x^3);
B = 105 - 45*x^2 + x^4;
tan_out = A/B;

I used tan(x) and moved it out of the per-sample loop in the example just for simplicity. That's not really what I use in practice; in fact the MSVC CRT tan() doesn't even work right when I use truncation as the rounding mode.

On top of that, you probably have an exponentiation inside the tangent, so what you really want is tan(pow(2, octaves + log2_basetune)) or something sufficiently similar. If you run this once per sample (and I certainly do), you almost certainly want to optimize the whole thing one way or another, but that's kinda a different topic.

edit: I like your approach, though... Pade approximations are generally what I try first whenever I need to approximate something nasty.

edit2: I can't resist commenting on the "different topic" just enough to mention that, in general, the accuracy requirements for tuning should be stated in terms of octaves (ie on a logarithmic scale), because the perception of pitch is such that a 10 cent deviation at 100 Hz sounds roughly as bad as a 10 cent deviation at 2 kHz, while the absolute errors are quite different.
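For instance, a sketch of that measurement (my naming): the same relative frequency error gives the same error in cents at any pitch.

```cpp
#include <cmath>

// Tuning error in cents: 1200 * log2(actual/target). Equal relative errors
// give equal perceived detuning regardless of the absolute pitch.
// The function name is illustrative.
double centsError(double actualHz, double targetHz)
{
    return 1200.0 * std::log2(actualHz / targetHz);
}
```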