In that other thread I mentioned you can linearize a filter around its known state and get reasonable results for not much overhead. Here's some discussion of the topic. [edit: also, while I said "zero-delay" in the thread title, I'm really doing "topology-preservation" too; you don't need to, if you don't want to, but it kinda helps if you want to mimic analog non-linearities]
I'm not going to concern myself with the BLT other than to note that since this is based on trapezoidal integration, and since I'm using TDF2 BLT integrators as building blocks, we can tune filters exactly in cases where the non-linearities are negligible (which is to say that I'm not going to discuss what happens when the non-linearities force the cutoff to change and we're in a seriously warped region of the BLT frequency scale). Namely, the form of integrator building block I'm using is:
Code: Select all
y[n+1] = s[n] + f*x[n]
s[n+1] = s[n] + 2*f*x[n]
where x[n], y[n] and s[n] are input, output and state, and the tuning that maps the normalized analog frequency 1 to whatever frequency we want is:
Code: Select all
f = tan(M_PI * freq / samplerate)
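For concreteness, here's what that building block might look like in C (a minimal sketch; the struct and function names are my own, not from any particular library):
Code: Select all
#include <math.h>

/* TDF2-style trapezoidal integrator building block */
typedef struct { double s; } Integrator;

/* f = tan(M_PI * freq / samplerate), computed by the caller */
static double integrator_tick(Integrator *ig, double f, double x)
{
    double y = ig->s + f * x;      /* y[n+1] = s[n] + f*x[n]   */
    ig->s   += 2.0 * f * x;        /* s[n+1] = s[n] + 2*f*x[n] */
    return y;
}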
To keep things simple and the math short, I'm going to use a one-pole OTA lowpass (straight from the LM13700 datasheet) as an example and ignore all the practical details like signal levels (eg we'd really need a ton of 1/vT factors) and auxiliary circuitry. So we have the following model for the OTA (the derivation or accuracy of which is not important) and capacitor:
Code: Select all
iOta = iCtl * tanh(vIn)
dVcap/dt = 1/C * iOta
Wiring the OTA output to the capacitor and feeding the capacitor voltage back to the inverting input gives us a one-pole lowpass:
Code: Select all
dVcap/dt = iCtl/C * tanh( vIn - vCap )
Folding the capacitor and the tuning details into the coefficient f, discretizing with the integrator above gives:
Code: Select all
y[n+1] = s[n] + f * tanh( x[n] - y[n+1] )
The zero-delay feedback problem is right there: computing y[n+1] requires y[n+1] itself, inside the tanh(). Instead of solving that implicit equation iteratively, we can observe that an OTA is a transconductance and define a signal-dependent transconductance:
Code: Select all
gM(vIn) = iCtl * ( tanh( vIn ) / vIn )
or:
Code: Select all
iOut = iCtl * T(vIn) * vIn
where:
Code: Select all
T(x) = tanh(x) / x
[footnote: you can do this for practically any type of memoryless non-linearity, even when "transconductance" as such wouldn't be meaningful]
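One practical detail worth flagging: T(x) = tanh(x)/x is 0/0 at the origin, so any real implementation has to sanitize small inputs. A minimal sketch, assuming a simple threshold plus the Taylor expansion tanh(x)/x ≈ 1 - x²/3 (both the threshold and the expansion order are my choices, tune to taste):
Code: Select all
#include <math.h>

/* T(x) = tanh(x)/x, sanitized near x = 0 where the division is 0/0 */
static double T(double x)
{
    if (fabs(x) < 1e-4)
        return 1.0 - x * x * (1.0 / 3.0);  /* tanh(x)/x = 1 - x^2/3 + O(x^4) */
    return tanh(x) / x;
}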
Substituting this linearized form back into the filter leads to:
Code: Select all
y[n+1] = s[n] + f * T(x[n] - y[n+1]) * (x[n] - y[n+1])
The trick: rather than evaluating T() at the unknown operating point, we evaluate it half a sample in the past, where everything is (approximately) known; the integrator state s[n] works nicely as an estimate of the output there:
Code: Select all
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
Approximating x[n-0.5] with the average of the two known input samples, T() collapses to a plain per-sample coefficient t, and the feedback equation becomes linear in y[n+1], so we can solve it directly (note that the integrator's input is t*(x[n] - y[n+1]), so the feedback term belongs in the state update too):
Code: Select all
t = T(0.5*(x[n] + x[n-1]) - s[n])
y[n+1] = (s[n] + f*t*x[n]) / (1 + f*t)
s[n+1] = s[n] + 2*f*t*(x[n] - y[n+1])
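Putting it all together, a minimal C version of the linearized one-pole might look like this (a sketch, not a reference implementation; it assumes the sanitized T() above, and the struct/function names are my own):
Code: Select all
typedef struct {
    double s;   /* integrator state s[n]  */
    double x1;  /* previous input x[n-1]  */
} OtaLowpass;

/* f = tan(M_PI * freq / samplerate), computed by the caller */
static double ota_lowpass_tick(OtaLowpass *lp, double f, double x)
{
    /* linearize tanh() at the known operating point half a sample back */
    double t = T(0.5 * (x + lp->x1) - lp->s);

    /* solve the now-linear feedback equation for y[n+1] */
    double y = (lp->s + f * t * x) / (1.0 + f * t);

    /* trapezoidal state update; same as s[n] + 2*f*t*(x[n] - y[n+1]) */
    lp->s  = 2.0 * y - lp->s;
    lp->x1 = x;
    return y;
}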
On paper this is less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a line from the origin to the known operating point, but unlike the tangent-fitting method, this tolerates violations of the "signal changes slowly" assumption much better; we might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said about tangent fitting, which can run into crazy paradoxes). You can certainly use this directly, and in most cases with a bit of oversampling (eg 4 times or so usually works sensibly for reasonable input levels) it sounds quite fine (and when I say "quite fine" I mean "certainly a lot better than a traditional sequential fudge-factored filter").
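As a usage sketch, a crude 4x oversampled wrapper could look like the following; it just holds the input constant over the sub-steps and keeps the last output, where a serious implementation would put proper band-limiting filters around the rate change (freq, samplerate, lp and x are assumed to exist in the surrounding code):
Code: Select all
/* tune at the 4x oversampled rate */
double f = tan(M_PI * freq / (4.0 * samplerate));

/* run four sub-steps per input sample (zero-order hold on x) */
double y = 0.0;
for (int i = 0; i < 4; i++)
    y = ota_lowpass_tick(&lp, f, x);
/* y holds the last sub-step; a real decimation filter is omitted here */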
Anyway, if you're NOT happy with the results (remember we're only first order as far as the non-linearities go), we can treat the calculated value as a prediction and apply a correction step. A sensible approach would be a variation of Heun's method: take the new state (and x[n+0.5]; you need one sample of lookahead), recalculate the "transconductances", redo the linear solve, then average the resulting state with the original prediction (and likewise for the outputs). Since the error of the "correction" step should be opposite to the error of the "prediction" step, they should mostly cancel. As far as I can tell, this is sufficient to make it a true second-order method (don't feel like doing formal error analysis, sorry).
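Here's a sketch of what that predictor-corrector step could look like on top of the tick above (my own arrangement of the idea: x_next is the one-sample lookahead, and x[n+0.5] is approximated as the average of x and x_next):
Code: Select all
/* prediction + Heun-style correction; needs one sample of input lookahead */
static double ota_lowpass_tick2(OtaLowpass *lp, double f,
                                double x, double x_next)
{
    double s = lp->s;

    /* prediction: identical to the plain tick */
    double t0 = T(0.5 * (x + lp->x1) - s);
    double yp = (s + f * t0 * x) / (1.0 + f * t0);
    double sp = 2.0 * yp - s;

    /* correction: recalculate the "transconductance" at the predicted
       state and x[n+0.5], then redo the linear solve from the old state */
    double t1 = T(0.5 * (x + x_next) - sp);
    double yc = (s + f * t1 * x) / (1.0 + f * t1);
    double sc = 2.0 * yc - s;

    /* average prediction and correction; their errors should mostly cancel */
    lp->s  = 0.5 * (sp + sc);
    lp->x1 = x;
    return 0.5 * (yp + yc);
}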
In practice the correction step roughly doubles the CPU use. In cases where the prediction works well (and most of the time it does), it's probably a better idea to double the oversampling instead, but if you hit an obviously degenerate case, the above could solve the issue.