Cheap non-linear zero-delay filters

DSP, Plugin and Host development discussion.

Post

mystran wrote: it doesn't really matter how accurately we recycle garbage (not in DSP anyway).
:lol:

This reminds me a bit of Dominique Wurtz's (aka karrikuh) diode ladder filter, which was in part inspired by mystran's comments. It's also quite cheap (only one nonlinearity), though he hints at solving the non-linear feedback too in the thread.

http://www.kvraudio.com/forum/viewtopic ... 95#3794095

Andrew

Post

Ichad.c wrote:
mystran wrote: it doesn't really matter how accurately we recycle garbage (not in DSP anyway).
:lol:

This reminds me a bit of Dominique Wurtz's (aka karrikuh) diode ladder filter, which was in part inspired by mystran's comments. It's also quite cheap (only one nonlinearity), though he hints at solving the non-linear feedback too in the thread.

http://www.kvraudio.com/forum/viewtopic ... 95#3794095
Dolphin has used a simple solver for the full five non-linearities since the beginning. Essentially it uses a predictor-corrector based on Euler, applied simultaneously to all the equations. That was the best I could think of at the time (I tried quite a few weird things), and it sorta-kinda works, but not quite as well as I'd like.

When I applied the method I'm describing here (a few days ago), the results were so much better it's not even funny. Compared to the already expensive method in Dolphin, I could even save on CPU (if I wanted to) and still improve the sound. I'll certainly be replacing the old filter in Dolphin as soon as I can sort out a model with all the auxiliary circuitry that Dolphin attempts to model.

However, if you do go transforming a diode ladder, don't get scared if the linear solution part isn't quite as pretty (or simple) as with transistor ladders. Being unbuffered, every output will depend on every state, and the solutions don't really factor very nicely at all, so the cost of the linear solver is certainly higher there.

For messy systems like this, I suggest the following strategy for finding an evaluation order: solve the group of equations and take whichever output has the shortest solution. Then remove the equation you solved, redo the solve for the remaining outputs (with the already solved value now available as a constant) and pick the shortest one again, until the remaining solutions are simple enough that it doesn't matter anymore.

Post

mystran wrote:
edit: I like your approach though.. Pade approximations are generally what I always try first whenever I need to approximate something nasty. ;)
I'm way into approximating stuff; it's just a shame that there aren't enough free programs that can approximate functions. All I know of are Maxima and lolremez. Since I'm a C++ noob (2 months), I've been steadily building a personal approximation library, but I'm going off topic.

Anyway, Padé approximations are actually the best IMHO in these situations, because the error shrinks the closer you get to zero: a Padé approximant is built from the Taylor expansion, and the "centred around zero" case is the special one known in math as a Maclaurin series. In other words, better tracking for low frequencies, so the more you oversample, the more accurate it becomes. Though the nr2 approximation is pretty dang good at 48kHz even without oversampling.

Andrew

Post

Ichad.c wrote: I'm way into approximating stuff; it's just a shame that there aren't enough free programs that can approximate functions. All I know of are Maxima and lolremez. Since I'm a C++ noob (2 months), I've been steadily building a personal approximation library, but I'm going off topic.
Maxima is what I use for almost all my (algebraic) math. It has its limits (e.g. you can sometimes get solutions out of Wolfram Alpha that Maxima doesn't know about), but most of the time I've been able to convince it to give me what I want. For numerical solutions you could use Octave (though learning that thing is still on my list of things to do, because I usually want the algorithm in C++ anyway, so I'll have to go the hard route regardless).

Post

Ichad.c wrote: Anyway, Padé approximations are actually the best IMHO in these situations, because the error shrinks the closer you get to zero
You can get the same kind of behavior by approximating f(x)/x instead of f(x) with a minimax approximation. However, the problem with minimax approximations is that, while producing the least possible maximum error, they also produce ripples. So for low-order approximations this might not be the best idea; for approximations where the error becomes negligible it's probably the best option. Having said that, it's not too easy to build a minimax rational approximation for tanh, because of convergence problems.
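For illustration, here's what the f(x)/x trick looks like with a Padé approximant rather than a minimax fit, so the coefficients fall straight out of the Maclaurin series. This is a generic sketch, not the approximation used by anyone's code in this thread:

```cpp
#include <cassert>
#include <cmath>

// tanh(x)/x from the [3/2] Pade approximant of tanh:
//   tanh(x) ~ x*(15 + x^2) / (15 + 6*x^2)
// which matches the Maclaurin series x - x^3/3 + 2*x^5/15 term by term.
// Approximating tanh(x)/x instead of tanh(x) makes the function branch-free
// and well defined at x = 0. It is only accurate for moderate |x| (for large
// |x| it tends to 1/6 instead of 0), so oversample and/or limit the input.
inline double tanh_div_x(double x)
{
    const double x2 = x*x;
    return (15 + x2) / (15 + 6*x2);
}
```

The caller then computes tanh(x) as x * tanh_div_x(x) and never divides by a possibly zero x.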

Post

On a non-cheap note, and since I'm fascinated by non-linearities in general:

Since one diode is exp(), all these technical papers simplify to tanh(). But in reality, aren't those diodes slightly mismatched? That would lead to asymmetry. I saw a couple of FFT analyser pics of filters in self-oscillation, and if I remember correctly, all of them had some even harmonics. That isn't proof in itself, because multiple factors in the circuits could cause it.

So, as an educated guess, do you think the diodes could lead to even harmonics?

Regards
Andrew

Post

Btw, if someone couldn't figure out how to get from the original post to the transistor ladder code I posted, here's a walkthrough that should clarify things (the process is pretty similar whatever the topology):

1. Write down the differential equations for the system to be transformed (I'm not going to redo the derivation for the transistor ladder this time; we'll just take the following for granted):

dV0/dt = iCtl * (tanh(in - r*V3) - tanh(V0))
dV1/dt = iCtl * (tanh(V0) - tanh(V1))
dV2/dt = iCtl * (tanh(V1) - tanh(V2))
dV3/dt = iCtl * (tanh(V2) - tanh(V3))

2. Replace each derivative with a BLT (trapezoidal) integrator and pull out the non-linearities; each tk below is the local gain tanh(x)/x of the corresponding tanh(), and f is the integrator gain:

y0 = s0 + f * (t0 * (in - r*y3) - t1*y0)
y1 = s1 + f * (t1 * y0 - t2 * y1)
y2 = s2 + f * (t2 * y1 - t3 * y2)
y3 = s3 + f * (t3 * y2 - t4 * y3)

3. Solve each one separately to pull out the denominator factors. This step is optional, and for filters like the SVF that are built directly from integrators it doesn't even apply, but usually it simplifies things:

y0 = (s0 + f * t0 * (in - r*y3)) / (1 + t1*f)
y1 = (s1 + f * t1 * y0) / (1 + t2*f)
y2 = (s2 + f * t2 * y1) / (1 + t3*f)
y3 = (s3 + f * t3 * y2) / (1 + t4*f)

Then substitute g0 = 1/(1 + t1*f) and so on, because these will appear as common subexpressions in the next step (and it's easier to pull them out in advance).

4. Fire up your favourite computer algebra system and solve the whole thing. In Maxima, for example:

Code:

  declare([s0,s1,s2,s3,in], mainvar);
  solve([
     y0 = (s0 + f * t0 * (in - r*y3)) * g0,
     y1 = (s1 + f * t1 * y0) * g1,
     y2 = (s2 + f * t2 * y1) * g2,
     y3 = (s3 + f * t3 * y2) * g3
    ], [y0, y1, y2, y3]);
Take one of the solutions (I'm using y3) and write it down in code. In most cases it doesn't really matter which one you pick; whichever looks the least evil is usually a good candidate, as long as it appears on the right-hand side of at least one other equation, so we can hope to simplify the remaining set of equations. Naturally you can pull out more than one if they are easy to calculate; whatever seems to result in the smallest total amount of work.

Since you probably want to combine common sub-expressions as much as possible, this tends to be the part where you need to do some manual work, and it's also the most error-prone part. If you're sure the differential equations from step 1 are fine, but the result doesn't work, blows up, or gives obvious garbage, then check and double-check that you didn't make any mistakes here. If all else fails, try the solution in the exact form you get from your CAS; often that will still work, and then you know you made a mistake in the refactoring.

Even if it does appear to work, I'd still check and double-check everything (all steps, really) carefully, because slight mistakes in the "t" factors might not be very obvious in "mostly linear" behavior (i.e. the filter just distorts wrong), and mistakes in the "g" factors only become obvious at higher cutoffs. I usually copy the refactored C++ solution back into Maxima, subtract the original solution as-is (i.e. copy-pasted) and feed the difference to radcan(), which is Maxima's simplification function; it should print zero. If it doesn't, the two formulas are not equivalent and you have a mistake (unfortunately not even this method is fool-proof, so don't get mad if you find a mistake in my code ;)).

Now, depending on the filter, you could then remove the solved output from the set of equations above and solve the remaining ones again (with one of the outputs now available as a constant). This should result in shorter solutions for the remaining equations; repeat as many times as seems necessary. For many "properly buffered" filters (like our transistor ladder here) that's not really necessary, since once we know one of the outputs we can use the formulas from step 3 directly for the rest... but that's what you'd do for the diode ladder or similar.
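For reference, here is a minimal sketch of where steps 2 to 4 land for this transistor ladder. This is a reconstruction for illustration, not the code posted earlier in the thread: th() stands in for your favourite tanh(x)/x approximation, the nonlinear gains are predicted from the current state, and oversampling/frequency dewarping are left out.

```cpp
#include <cassert>
#include <cmath>

// tanh(x)/x with a guard for the removable singularity at x = 0
static double th(double x)
{
    return std::fabs(x) < 1e-6 ? 1.0 - x*x/3 : std::tanh(x)/x;
}

// Transistor ladder following steps 2-4 above. s[0..3] are the integrator
// states, f is the integrator gain (roughly tan(pi*fc/fs)), r is resonance.
struct Ladder
{
    double s[4] = {0, 0, 0, 0};

    double tick(double in, double f, double r)
    {
        // estimate the nonlinear gains t0..t4 from the current state
        const double t0 = th(in - r*s[3]);
        const double t1 = th(s[0]), t2 = th(s[1]);
        const double t3 = th(s[2]), t4 = th(s[3]);

        // denominator factors from step 3
        const double g0 = 1/(1 + f*t1), g1 = 1/(1 + f*t2);
        const double g2 = 1/(1 + f*t3), g3 = 1/(1 + f*t4);

        // collapse the chain into y3 = S + G*(in - r*y3), then solve for y3
        const double G = f*f*f*f * t0*t1*t2*t3 * g0*g1*g2*g3;
        const double S = g3*(s[3] + f*t3*g2*(s[2] + f*t2*g1*(s[1] + f*t1*g0*s[0])));
        const double y3 = (S + G*in) / (1 + G*r);

        // back-substitute the step-3 formulas for the remaining outputs
        const double y0 = g0*(s[0] + f*t0*(in - r*y3));
        const double y1 = g1*(s[1] + f*t1*y0);
        const double y2 = g2*(s[2] + f*t2*y1);

        // trapezoidal state update: y = s + f*u implies s_new = 2*y - s
        s[0] = 2*y0 - s[0]; s[1] = 2*y1 - s[1];
        s[2] = 2*y2 - s[2]; s[3] = 2*y3 - s[3];
        return y3;
    }
};
```

A quick sanity check on the algebra: with r = 0 and a constant input, all the tanh arguments go to zero at equilibrium, so the output settles exactly to the input.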

Post

Ichad.c wrote: Since one diode is exp(), all these technical papers simplify to tanh(). But in reality, aren't those diodes slightly mismatched? That would lead to asymmetry. I saw a couple of FFT analyser pics of filters in self-oscillation, and if I remember correctly, all of them had some even harmonics. That isn't proof in itself, because multiple factors in the circuits could cause it.
Yup, that's right. Another thing left out from the example for simplification reasons and because I don't want people to copy my flavor of imperfections exactly. ;)

What you get for mismatched transistors (or diodes) depends on how they are mismatched, and this can vary with temperature (since they probably have slightly different temperature drift and so on), but the most obvious result tends to be some CV feed-through (i.e. the control current doesn't cancel out completely). If you model this for ladders, you will typically get a somewhat noisier and/or rougher sound (more so for the diode ladder), but remember that if you do this you almost certainly should also model any DC blockers in the feedback loops, because the two tend to interact in curious ways (and you might get DC problems too).

Post

Hey mystran, thanks so much for sharing this! Your technique looks like it provides a very good sound quality / CPU usage ratio.

So I wrote some code for a 4-pole OTA cascade based on your difference equations, using your approach (EDIT: corrected non-linear gain calc):

Code:

inline double tick(const double x, const double fc)
{
    // range [0..1] => 0 Hz .. Nyquist
    assert(fc > 0 && fc < 1);
    const double wc = PI_HALF * fc; // with 2x oversampling, frequency dewarping is not required
    // const double wcb = 2 * tan(0.5*wc); // dewarping

    // evaluate the non-linear gains
    const double kk = k*tanhx_div_x(s[3]);
    const double t1 = tanhx_div_x(x - kk*s[3] - s[0]);
    const double t2 = tanhx_div_x(s[0] - s[1]);
    const double t3 = tanhx_div_x(s[1] - s[2]);
    const double t4 = tanhx_div_x(s[2] - s[3]);

    // linearize around the operating point given by the current filter state
    const double a1 = wc*t1;
    const double a2 = wc*t2;
    const double a3 = wc*t3;
    const double a4 = wc*t4;
    const double b1 = 1 / (1 + a1), b2 = 1 / (1 + a2);
    const double b3 = 1 / (1 + a3), b4 = 1 / (1 + a4);

    // solve feedback
    double ss = b1*s[0];
    ss = b2*(a2*ss + s[1]);
    ss = b3*(a3*ss + s[2]);
    ss = b4*(a4*ss + s[3]);
    const double g = a1*b1 * a2*b2 * a3*b3 * a4*b4;
    const double y4 = (g*x + ss) / (1 + g*kk);

    // update filter state
    const double y0 = x - kk*y4;
    const double y1 = b1*(a1*y0 + s[0]);
    s[0] += 2*a1*(y0 - y1);
    const double y2 = b2*(a2*y1 + s[1]);
    s[1] += 2*a2*(y1 - y2);
    const double y3 = b3*(a3*y2 + s[2]);
    s[2] += 2*a3*(y2 - y3);
    s[3] += 2*a4*(y3 - y4);

    return y4;
}
The model includes a clipper in the feedback path because it would blow up otherwise (I wonder why this is necessary, since we already have clippers in each OTA stage, which should bound their output values, shouldn't they?). It works well with 2x oversampling and k <= 4.3 (EDIT: after the correction, the code also works with higher k). For larger k and very high cutoff it can still blow up, but a maximum of k = 4.3 is perfectly fine. Also, I found that in practice the 1/2-sample delay of the input doesn't yield an audible change in sound/behavior, so I omitted it.

So, mystran (or anybody else), could you verify whether this roughly models the OTA cascade correctly? I'm asking because the code sometimes produces strange sounds (not necessarily bad!) when the filter is driven strongly and the resonance frequency is close to some harmonic of the input signal (an inharmonic sound, like ring modulation or FM). Is this behavior found in analog circuits? I guess this could be the kind of "self-modulation" you were talking about in other threads. Increasing oversampling by another factor of 2 doesn't change this behavior.
Last edited by karrikuh on Fri May 18, 2012 10:21 pm, edited 1 time in total.

Post

mystran wrote:
Yup, that's right. Another thing left out from the example for simplification reasons and because I don't want people to copy my flavor of imperfections exactly. ;)
Imperfections are what make things beautiful in my opinion; I tend to think about the imperfections part of anything before anything else. The part that intrigued me most in your code was the "tanh(sqrt(x))/sqrt(x)" part.

I've always had this simple theory: the human brain is naturally wired for pattern recognition. Say you stand on a black-and-white tiled floor in a checkers pattern; you'll see either white on black or black on white, since your brain is always analyzing. Now if you slightly mess with the checkers pattern on the floor (make it random, add colour, etc.), your brain can't "see" a pattern, so it concentrates less on it, treating it as natural or "not a threat". The black-and-white checkered floor thing is quite strange; you'll often see this behaviour in kids: only walk on white/black.

I think the most important part in audio, be it analog or digital equipment or even song-writing, is to slightly break the pattern...

Regards
Andrew

Post

karrikuh wrote: The model includes a clipper in the feedback path because it would blow up otherwise (I wonder why this is necessary, since we already have clippers in each OTA stage, which should bound their output values, shouldn't they?).
No, they don't. The non-linearities act on the input to the integrator (see below).

In practice you would normally have another OTA (for the resonance CV) and/or a feedback limiter in the feedback path, e.g. you can take the feedback through another tanh() or something. You might additionally want to bias that so it clips somewhat earlier than the filter core; the level of this bias is actually quite important as far as sound goes (and I have ABSOLUTELY NO IDEA what the correct bias would be for any particular filter; either measure it yourself, ask someone else, or just try something until it sounds nice).
So, mystran (or anybody else), could you verify if this roughly modelling the OTA cascade correctly?
If I'm not mistaken (my electronics is not that strong), we should have roughly the following (assuming unity gain from each stage; I think that's wrong for CEM3320-style filters, since they specify 91k for input and 100k for feedback, which implies some voltage gain... then again tanh() is probably bullshit for those gain cells too, since they claim most distortion is second order... but whatever):

Code:

 dV0/dt = f * tanh( in - r * limit(V3) - V0 )
 dV1/dt = f * tanh( V0 - V1 )
 dV2/dt = f * tanh( V1 - V2 )
 dV3/dt = f * tanh( V2 - V3 )
Notice that each OTA feeds a (loss-less) integrator (a capacitor plus buffer), which is fed back to the OTA input to get the "lossy" low-pass response. Hence the input to the saturation is the difference (Vin - Vout), which doesn't limit amplitude at all; it's really just a slew-rate limiter. In practice the amplitude is of course limited, because at some point the OTA will hit its supply rails, which our model ignores; there's no way a practical OTA can supply current that requires the voltage to exceed the supply, so the maximum capacitor voltage can't get any higher either. But since you probably wouldn't want to zener the inputs of your OTA either (I think that's what happens if you go past the specified input-level limits), you will need a feedback limiter anyway, and we might(!) be able to ignore the whole supply-rail issue.
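To see the slew-limiting (rather than amplitude-limiting) behaviour in isolation, here's a minimal single-stage sketch in the same style as the code above. This is my own illustration, with th() standing in for any tanh(x)/x approximation:

```cpp
#include <cassert>
#include <cmath>

// tanh(x)/x with a guard for the removable singularity at x = 0
static double th(double x)
{
    return std::fabs(x) < 1e-6 ? 1.0 - x*x/3 : std::tanh(x)/x;
}

// One OTA low-pass stage: dV/dt = f * tanh(u - V), trapezoidal integration.
// The tanh is linearized as t*(u - y) with t = tanh(x)/x estimated at the
// current state, so the implicit equation stays linear in y.
struct OtaStage
{
    double s = 0; // integrator state

    double tick(double u, double f)
    {
        const double t = th(u - s);           // nonlinear gain estimate
        const double a = f * t;
        const double y = (s + a*u) / (1 + a); // solves y = s + a*(u - y)
        s = 2*y - s;                          // trapezoidal state update
        return y;
    }
};
```

Note that at DC the stage settles to y = u exactly (the tanh argument goes to zero): the saturation limits the rate of change, not the amplitude, which is why a separate feedback limiter is still needed.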

In any case, I don't think you got it quite right, since you're ignoring the (local) feedback when calculating the non-linearities. The rest of it looks reasonable. :) [EDIT: oh wait, I'm not quite sure what's going on... are you using just the feedback for the non-linearities? That's not a good idea; that's going to blow up!]
Last edited by mystran on Fri May 18, 2012 9:59 pm, edited 3 times in total.

Post

Ichad.c wrote:
mystran wrote: Yup, that's right. Another thing left out from the example for simplification reasons and because I don't want people to copy my flavor of imperfections exactly. ;)
Imperfections are what make things beautiful in my opinion; I tend to think about the imperfections part of anything before anything else.
Yes, in my opinion imperfections are where real "circuit modelling" starts, but that's orthogonal to the discussion of the technique used. :)
The part that intrigued me most in your code was the "tanh(sqrt(x))/sqrt(x)" part.
But it's perfectly logical. The line of thinking (or how I arrived there) is roughly this: to link a stereo compressor, you force the gain reduction to be the same for both channels. To make it independent of equal-power panning, you use the vector magnitude. Since saturation is just a form of compression, it makes sense that you can "stereo link" any saturation too. I originally did it because I hate the "distortion localizes in the speakers" effect of dual-mono processing.

However, sqrt(x) is expensive, so you might want to approximate it, which raises the question: why approximate something that you're just going to feed into another approximation? So it's logical to approximate the whole thing as a single step. Once I'd done that, I realized that (1) I had now eliminated any branches (or absolute values) required by many direct tanh() approximations, since the input (being a square) is always positive, and (2) it's actually a fairly good approximation (in terms of CPU vs accuracy) even for "mono" inputs, unless you have to handle huge feedback gains.
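An unoptimized reference sketch of that idea (with an explicit sqrt and tanh; the optimization described above would replace tanh(sqrt(x))/sqrt(x) by a single rational approximation in x = l*l + r*r):

```cpp
#include <cassert>
#include <cmath>

// "Stereo-linked" saturation: derive one gain from the vector magnitude of
// the (l, r) pair and apply it to both channels, so the distortion doesn't
// localize in one speaker the way dual-mono processing does.
inline void saturate_linked(double& l, double& r)
{
    const double x = l*l + r*r;   // always >= 0, so no fabs() needed
    const double m = std::sqrt(x);
    // gain = tanh(sqrt(x))/sqrt(x); guard the removable singularity at 0
    const double g = m < 1e-9 ? 1.0 : std::tanh(m) / m;
    l *= g;
    r *= g;
}
```

After the call, the vector magnitude of (l, r) equals tanh of the original magnitude, so both channels sit at the same point on the saturation curve and the channel balance is preserved.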

The above line of thinking wasn't something that happened overnight. The total time-line probably looks more like 2 or 3 years. :)
I think the most important part in audio, be it analog or digital equipment or even song-writing is to slightly -> break the pattern...
I agree. :)

Post

Thanks mystran so far for the prompt reply.
mystran wrote:

Code:

 dV0/dt = f * tanh( in - r * limit(V3) - V0 )
 dV1/dt = f * tanh( V0 - V1 )
 dV2/dt = f * tanh( V1 - V2 )
 dV3/dt = f * tanh( V2 - V3 )
Yes, these are exactly the differential equations I had in mind!
mystran wrote: In any case, I don't think you got it quite right, since you're ignoring the (local) feedback when calculating the non-linearities. Rest of it looks reasonable. :)
Ah, thanks. I edited the code in my post above, so hopefully it's correct now? It doesn't blow up with very high k anymore!
Still, I'm wondering why this filter sounds rather dirty and produces these strange artifacts that aren't present in the ladder's output?

Post

Regarding the CEM3320 datasheet's 91k vs 100k: if my limited EE understanding doesn't fail me, and we assume the stages are true low-pass stages, then one would expect a voltage gain of 100k/91k ~ 1.1 per stage, which gives a total gain of about 1.458, i.e. (still assuming tanh()):

Code:

 dV0/dt = f * tanh( 1.1 * in - r * limit(V3) - V0 )
 dV1/dt = f * tanh( 1.1 * V0 - V1 )
 dV2/dt = f * tanh( 1.1 * V1 - V2 )
 dV3/dt = f * tanh( 1.1 * V2 - V3 )
The resonance VCA is still a question mark. If it's safe to assume it's a simple OTA, and the input resistor to ground (3.5k) is also what the other gain cells have (or had, if they were OTAs), then we'd have roughly a gain of 1.96 ~ 2 from the 51k resistor from the audio output. Maybe one could calculate that from the values given in the datasheet, but I'm too tired to figure out an obvious way right now. Anyway, that assumption would give:

Code:


 r' = r / (1.458 * 1.96) ~ r / 2.86

 dV0/dt = f * tanh( 1.1 * in - r' * tanh(1.96 * V3) - V0 )
 dV1/dt = f * tanh( 1.1 * V0 - V1 )
 dV2/dt = f * tanh( 1.1 * V1 - V2 )
 dV3/dt = f * tanh( 1.1 * V2 - V3 )


That actually sounds quite reasonable (with the resonance clipping before the stages start going foobar). Unfortunately I have no such filter to measure against...

edit: sound sample of the above (if I didn't make any mistakes): http://www.signaldust.com/files/cascade.mp3
(mp3, but high bitrate... oh, and 44.1kHz host rate with 4x oversampling)

I think that's not too bad, whether or not it models anything :D
Last edited by mystran on Fri May 18, 2012 11:09 pm, edited 4 times in total.

Post

karrikuh wrote: Still, I'm wondering why this filter sounds rather dirty and produces these strange artifacts not present in the ladder's output?
I don't know. Try some synth that has such a filter and you might observe similar results. :P

PS. if you don't observe similar results, then our model is inaccurate. ;)
