KVR :: DSP and Plug-in Development » Cheap non-linear zero-delay filters
There are 232 posts in this topic.

mystran - Thu May 17, 2012 10:14 am
Ok, so I didn't want to pollute that other thread, so I'm starting a new one.

In that other thread I mentioned you can linearize a filter around its known state and get reasonable results for not much overhead. Here's some discussion of the topic. [edit: also, while I said "zero-delay" in the thread title, I'm really doing "topology-preservation" too; you don't need to, if you don't want to, but it kinda helps if you want to mimic analog non-linearities]

I'm not going to concern myself with BLT other than to note that since this is based on trapezoidal integration, and since I'm using TDF2 BLT integrators as building blocks, we can tune filters exactly in cases where the non-linearities are negligible (which is to say that I'm not going to discuss what happens when the non-linearities force the cutoff to change and we're in a seriously warped region of the BLT frequency scale). Namely, the form of integrator building block I'm using is:

y[n+1] = s[n] + f*x[n]
s[n+1] = s[n] + 2*f*x[n]

where
x[n], y[n] and s[n] are input, output and state

and the tuning that maps the normalized analog frequency 1
to whatever frequency we want is
f = tan(M_PI * freq / samplerate)

A few useful observations: if we think of y[n] and s[n] as continuous piece-wise linear functions y(n) and s(n), then y(n) = s(n-0.5). So the output is essentially a half-sample delayed version of the state. We're going to abuse this.
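To make that half-sample relation concrete, here's a minimal sketch of the integrator (the wrapper and names are mine, not from the post): since y[n+1] = s[n] + f*x[n] and s[n+1] = s[n] + 2*f*x[n], the output is exactly the midpoint of two consecutive states.

```cpp
#include <cassert>
#include <cmath>

// one step of the TDF2-style trapezoidal (BLT) integrator from the text;
// returns y[n+1] and advances the state s in place
double integrate(double &s, double f, double x)
{
    double y = s + f*x; // y[n+1] = s[n] + f*x[n]
    s += 2*f*x;         // s[n+1] = s[n] + 2*f*x[n]
    return y;
}
```

With y[n+1] = (s[n] + s[n+1])/2 holding exactly, interpreting the states as piece-wise linear functions gives y(n) = s(n-0.5).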

To keep things simple and the math short, I'm going to use a one-pole OTA lowpass (straight from the LM13700 datasheet) as an example and ignore all the practical details like signal levels (eg we'd really need a ton of 1/vT factors) and auxiliary circuitry. So we have the following model for the OTA (the derivation or accuracy of which is not important) and capacitor:
iOta = iCtl * tanh(vIn)
dVcap/dt = 1/C * iOta

And with feedback (since we want lowpass and not an integrator):
dVcap/dt = iCtl/C * tanh( vIn - vCap )

So if we discretize, we get:
y[n+1] = s[n] + f * tanh( x[n] - y[n+1] )

This looks like it needs a feedback solver, but here's the thing: Let's rewrite the OTA in terms of transconductance gM = iOut/vIn and we get:

gM(vIn) = iCtl * ( tanh( vIn ) / vIn )
or:
iOut = iCtl * T(vIn) * vIn
where:
T(x) = tanh(x) / x

[footnote: tanh(x)/x=1 in the limit x -> 0, but it's easy to adapt most tanh(x) approximations to return tanh(x)/x instead and avoid this issue]
[footnote: you can do this for practically any type of memoryless non-linearity, even when "transconductance" as such wouldn't be meaningful]

y[n+1] = s[n] + f * T(x[n] - y[n+1]) * (x[n] - y[n+1])

The point of this exercise is that we can now treat the non-linear transconductance and the actual input separately. So what we can do is combine the Euler method for the non-linearity with the trapezoidal method for the linear part! In an alternative interpretation, we delay the transconductance by half a sample. Recall that s[n] = y[n+0.5]. For consistency, use x[n-0.5] = 0.5*(x[n]+x[n-1]) for the actual input signal; everything else is available from one of the filter states:
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])

Now the feedback dependence is linear, so we can implement this as:
t = T(0.5*(x[n] + x[n-1]) - s[n])
y[n+1] = (s[n] + f*t*x[n]) / (1 + f*t)
s[n+1] = s[n] + 2*f*t*(x[n] - y[n+1])

Note that technically this is only reasonable when the signal changes slowly. That's true at audible frequencies if we oversample to avoid aliasing. With higher gain or nastier non-linearities it deviates more, but so does aliasing, and once you oversample you improve both again. In practice the above works remarkably well for almost anything I've thrown at it so far (transistor ladders, diode ladders, OTA cascades... you name it).
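Putting the three lines above together, a complete one-pole might look like this (a sketch of my own, not mystran's code; f is assumed to come from the tan() tuning earlier):

```cpp
#include <cassert>
#include <cmath>

// one-pole OTA lowpass with the non-linearity evaluated from the
// half-sample delayed input and the state, so the feedback solve is linear
struct OnePoleNL
{
    double s = 0;  // integrator state
    double z1 = 0; // previous input sample, for x[n-0.5]

    static double T(double x) // tanh(x)/x, with the x -> 0 limit handled
    {
        return (std::fabs(x) < 1e-9) ? 1.0 : std::tanh(x)/x;
    }

    double process(double x, double f) // f = tan(M_PI * freq / samplerate)
    {
        double xh = 0.5*(x + z1); z1 = x;   // x[n-0.5]
        double t = T(xh - s);               // gain at the known operating point
        double y = (s + f*t*x) / (1 + f*t); // linear feedback solve
        s += 2*f*t*(x - y);                 // trapezoidal state update
        return y;
    }
};
```

Feeding it DC settles at unity gain, as a lowpass should.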

On paper it's less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a linear curve from the origin to the known operating point, but unlike the tangent fitting method, this tolerates violations of the "signal changes slowly" assumption much better; we might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said about tangent fitting, which can run into crazy paradoxes). You can certainly use this directly, and in most cases with a bit of oversampling (eg 4 times or so usually works sensibly for reasonable input levels) it sounds quite fine (and when I say "quite fine" I mean "certainly a lot better than a traditional sequential fudge-factored filter").

Anyway, if you're NOT happy with the results (remember we're only first order as far as the non-linearities go), we can treat the calculated value as a prediction and apply a correction step. A sensible approach would be a variation of Heun's method: take the new state (and x[n+0.5]; you need one step of lookahead), recalculate the "transconductances", redo the linear solver, then average the resulting state with the original prediction (and likewise for the outputs). Since the error of the "correction" step should be opposite to the error of the "prediction" step, they should mostly cancel. As far as I can tell, this is sufficient to make it a true second-order method (don't feel like doing formal error analysis, sorry).

In practice the correction step roughly doubles the CPU use. In cases where the prediction works well (and most of the time it does), it's probably a better idea to double the oversampling instead, but if you hit an obviously degenerate case, then the above could solve the issue.
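For the one-pole example, my reading of that Heun-style correction would be something like this (a sketch under my own assumptions, including one sample of input lookahead; not verified second-order code):

```cpp
#include <cassert>
#include <cmath>

struct OnePoleHeun
{
    double s = 0, z1 = 0;

    static double T(double x) { return (std::fabs(x) < 1e-9) ? 1.0 : std::tanh(x)/x; }

    // x1 is x[n], x2 is x[n+1] (the one step of lookahead for x[n+0.5])
    double process(double x1, double x2, double f)
    {
        double xh = 0.5*(x1 + z1); z1 = x1;  // x[n-0.5]

        // prediction: transconductance from the known state
        double t1 = T(xh - s);
        double yP = (s + f*t1*x1) / (1 + f*t1);
        double sP = s + 2*f*t1*(x1 - yP);

        // correction: re-evaluate at the predicted state and x[n+0.5],
        // then redo the linear solve from the ORIGINAL state
        double t2 = T(0.5*(x1 + x2) - sP);
        double yC = (s + f*t2*x1) / (1 + f*t2);
        double sC = s + 2*f*t2*(x1 - yC);

        // average prediction and correction (states and outputs alike)
        s = 0.5*(sP + sC);
        return 0.5*(yP + yC);
    }
};
```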
mystran - Thu May 17, 2012 10:18 am
For the TL;DR folks, here's a transistor ladder using the above (without correction) to play with.. note that this assumes a straight, linear feedback loop, which is not really a very good model in practice

//
// You can use this however you like for pretty much any purpose,
// as long as you don't claim you wrote it. There is no warranty.
//
// Distribution of substantial portions of this code in source form
// must include this copyright notice and list of conditions.
//

// input delay and state for member variables
double z1;
double s[4] = { 0, 0, 0, 0 };

// tanh(x)/x approximation, flatline at very high inputs
// so might not be safe for very large feedback gains
// [limit is 1/15 so very large means ~15 or +23dB]
double tanhXdX(double x)
{
double a = x*x;
// IIRC I got this as Pade-approx for tanh(sqrt(x))/sqrt(x)
return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
}

// cutoff as normalized frequency (eg 0.5 = Nyquist)
// resonance from 0 to 1, self-oscillates at settings over 0.9
// NOTE: function name is a placeholder; the original signature line was missing
void transistorLadder(double cutoff, double resonance,
                      double * in, double * out, unsigned nsamples)
{
// tuning and feedback
double f = tan(M_PI * cutoff);
double r = (40.0/9.0) * resonance;

for(unsigned n = 0; n < nsamples; ++n)
{
// input with half delay, for non-linearities
double ih = 0.5 * (in[n] + z1); z1 = in[n];

// evaluate the non-linear gains
double t0 = tanhXdX(ih - r * s[3]);
double t1 = tanhXdX(s[0]);
double t2 = tanhXdX(s[1]);
double t3 = tanhXdX(s[2]);
double t4 = tanhXdX(s[3]);

// g# the denominators for solutions of individual stages
double g0 = 1 / (1 + f*t1), g1 = 1 / (1 + f*t2);
double g2 = 1 / (1 + f*t3), g3 = 1 / (1 + f*t4);

// f# are just factored out of the feedback solution
double f3 = f*t3*g3, f2 = f*t2*g2*f3, f1 = f*t1*g1*f2, f0 = f*t0*g0*f1;

// solve feedback
double y3 = (g3*s[3] + f3*g2*s[2] + f2*g1*s[1] + f1*g0*s[0] + f0*in[n]) / (1 + r*f0);

// then solve the remaining outputs (with the non-linear gains here)
double xx = t0*(in[n] - r*y3);
double y0 = t1*g0*(s[0] + f*xx);
double y1 = t2*g1*(s[1] + f*y0);
double y2 = t3*g2*(s[2] + f*y1);

// update state
s[0] += 2*f * (xx - y0);
s[1] += 2*f * (y0 - y1);
s[2] += 2*f * (y1 - y2);
s[3] += 2*f * (y2 - t4*y3);

out[n] = y3;
}
}
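As a quick sanity check of the tanhXdX approximation above, here's a small standalone harness (mine, not part of the original post) comparing it against the exact tanh(x)/x; within the +-2 range the error stays well below 1e-4:

```cpp
#include <cassert>
#include <cmath>

// the Padé approximation from the listing above, reproduced standalone
double tanhXdX(double x)
{
    double a = x*x;
    return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
}
```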

mystran - Thu May 17, 2012 10:47 am
Oh and AQ: if you use this you owe me a Poly-Ana.
mystran - Thu May 17, 2012 11:56 am

• if you want to filter stereo signals, you can treat the signals and states as vectors and evaluate the "transconductances" with the Pythagorean norm sqrt(L*L+R*R), which of course is specifically why I was approximating tanh(sqrt(x))/sqrt(x) in the first place... this isn't really circuit modelling anymore (good luck with the analog version), but unlike a dual-mono implementation, distortion tends to localize at the input signals rather than at the individual speakers.. I personally like the effect and I've used it for traditional filters before I even touched any zero-delay stuff (Sweep uses it, for example; my dev-builds for the new version already use a better "OTA SVF" model with the same approach to stereo)

• if you still feel you need to iterate (which I personally think is quite futile if you apply the correction step, since trapezoidal is only second-order anyway), then my educated guess is that iterating on the transconductance is likely to converge faster than iterating on the signal itself... but like I said, I don't see the point without a better integrator

• note that without the correction step, the cost here is almost exactly the cost of solving a linear zero-delay filter + the cost of evaluating the non-linearities once.
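For the stereo trick in the first bullet, a sketch (my code, not from the thread): since tanhXdX above is internally a polynomial in x*x, you can feed it the squared vector magnitude L*L+R*R directly and never evaluate a sqrt():

```cpp
#include <cassert>
#include <cmath>

// approximates tanh(sqrt(a))/sqrt(a); same polynomial as tanhXdX,
// but taking the squared magnitude as the argument
double tanhSqrtAdA(double a)
{
    return ((a + 105)*a + 945) / ((15*a + 420)*a + 945);
}

// stereo-linked saturation: one shared gain from the vector magnitude,
// applied to both channels
void saturateStereo(double &l, double &r)
{
    double g = tanhSqrtAdA(l*l + r*r);
    l *= g;
    r *= g;
}
```

With one channel silent this reduces to the plain tanh() saturator, and with equal-power panned material the gain reduction is independent of pan position.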

mr.bungle - Thu May 17, 2012 12:36 pm
I tried the filter in my polysynth. Hot damn. I think it's the best sounding thing in a VST so far.

Here's a build: http://dl.dropbox.com/u/13976481/MauSynth_BETA_RC1.dll
Ichad.c - Thu May 17, 2012 12:53 pm
Thanks for all the detailed explanations! The way in which you make it sound so easy though - is kinda scary

My one brain-cell is learning to do simple SSE and the other one is taking a nap

Your 4L2 plug is pretty dang useful as well.

Regards
Andrew
camsr - Thu May 17, 2012 11:45 pm
Wow this filter tunes spot on. I was able to use full resonance while tracking the fundamental frequency with the MauSynth and it stayed sharp from the bass to the treble. Nice work guys
Z1202 - Fri May 18, 2012 12:11 am
mystran wrote:
On paper it's less accurate than fitting a linear curve directly to the tangent of the tanh() for slowly changing signals, since we are fitting a linear curve from the origin to the known operating point, but unlike the tangent fitting method, this tolerates violations of the "signal changes slowly" assumption much better; we might feed a bit too much or too little current, but most of the time the results are relatively sane (which cannot be said about tangent fitting, which can run into crazy paradoxes).
Nice! (well, I didn't verify or try this myself, but from what you're saying seems really nice).

Regards,
{Z}
mystran - Fri May 18, 2012 1:18 am

edit: also added a remark in the original post about "topology-preserving" since that's what it is.. when I wrote the thing originally, I didn't realize anyone would want to do "zero-delay" without doing topology-preserving, but apparently that's also possible
Big Tick - Fri May 18, 2012 3:50 am
Thanks ! There are a few x[x] which should be x[n].

Also, something doesn't make sense to me: if x[n-0.5] = 0.5*(x[n]+x[n-1]) and
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
then shouldn't you have
t = T(0.5*(x[n] + x[n-1]) - s[n])
instead of
t = T(s[n] + 0.5*(x[n] + x[n-1]))?
mystran - Fri May 18, 2012 3:56 am
Big Tick wrote:
Thanks ! There are a few x[x] which should be x[n].

Also, something doesn't make sense to me: if x[n-0.5] = 0.5*(x[n]+x[n-1]) and
y[n+1] = s[n] + f * T(x[n-0.5] - s[n]) * (x[n] - y[n+1])
then shouldn't you have
t = T(0.5*(x[n] + x[n-1]) - s[n])
instead of
t = T(s[n] + 0.5*(x[n] + x[n-1]))?

Yes the sign of s[n] was obviously wrong. Corrected in OP, thanks for noticing.

Regarding x[x], I found one and corrected it.
Big Tick - Fri May 18, 2012 4:04 am
Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?
Ichad.c - Fri May 18, 2012 5:58 am
Big Tick wrote:
Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?

Good Q -> sounds sneaky, but wouldn't that be dependent on your oversampling technique?

Random add-on: Since there is the word "cheap" in the heading, you could use 1 of these 2 approximations for tan(x):

Quote:
A = -15*x+x^3;
B = 3*(-5+2*x^2);
tan_out = A/B;

A = 5*(21*x - 2*x^3);
B = 105-45*x^2+x^4;
tan_out = A/B;

I generally use nr2 - works well - even without oversampling. Haven't checked it with mystran's filter above though. nr1 might track equally well if you oversample.

Just my 2cents.

Regards
Andrew
mystran - Fri May 18, 2012 6:01 am
Big Tick wrote:
Can you 2x oversample it, and then get the actual half-sample delay from the previous (oversampled) sample? Or is this useless?

You mean like 2x oversample input, instead of taking average?

I've not tried it, but it sounds rather useless. In the worst case the high-frequency attenuation from averaging might reduce aliasing slightly. In the best case (when you're oversampling enough that aliasing isn't an issue) the signal is smooth enough that you could forget the delay and just use the input sample (I've tried both ways and, generally speaking, there's no audible difference).

Mostly I do it for theoretical consistency (ie looks good on paper). It's certainly not a critical component of the method, so I wouldn't waste too many cycles on it (you can always find something else to improve instead).

Oh and remember that aliasing in a non-linear recursive filter isn't really just an audible problem: once you start getting significant aliasing, your filter state essentially becomes bogus, and you enter a garbage-in-garbage-out feedback loop. In my book it doesn't really matter how accurately we recycle garbage (not in DSP anyway).
mystran - Fri May 18, 2012 6:20 am

Ichad.c wrote:
Random add-on: Since there is the word "cheap" in the heading, you could use 1 of these 2 approximations for tan(x):

Quote:
A = -15*x+x^3;
B = 3*(-5+2*x^2);
tan_out = A/B;

A = 5*(21*x - 2*x^3);
B = 105-45*x^2+x^4;
tan_out = A/B;

I used tan(x) and moved it out of the per-sample loop in the example just for simplicity. That's not really what I use in practice; in fact the MSVC CRT tan() doesn't even work right when I use truncation as the rounding mode.

On top of that, you probably have an exponentiation inside the tangent, so what you really want is tan(pow(2, octaves + log2_basetune)) or something sufficiently similar. If you run this once per sample (and I certainly do), you almost certainly want to optimize the whole thing in one way or another, but that's kind of a different topic.

edit: I like your approach though.. Pade approximations are generally what I always try first whenever I need to approximate something nasty.

edit2: I can't resist commenting on the "different topic" just enough to mention that, in general, the accuracy requirements for tuning should be measured in octaves (ie on a logarithmic scale), because the perception of pitch is such that 10 cents of deviation at 100Hz sounds roughly as bad as 10 cents of deviation at 2kHz, even though the absolute errors are quite different.
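To put a number on that, here's a small harness (mine) that measures the tuning error of a tan() approximation in cents, by mapping the approximate coefficient back to the frequency the filter actually lands on; tanPade1 is the first ("nr1") approximation quoted earlier in the thread:

```cpp
#include <cassert>
#include <cmath>

// the "nr1" Padé approximation of tan(x) quoted above
double tanPade1(double x)
{
    return (x*x*x - 15*x) / (3*(2*x*x - 5));
}

// tuning error in cents at a given cutoff and samplerate:
// 1200 * log2(actual frequency / requested frequency)
double tuningErrorCents(double freq, double samplerate)
{
    double w = M_PI * freq / samplerate;
    double fEff = std::atan(tanPade1(w)) * samplerate / M_PI;
    return 1200.0 * std::log2(fEff / freq);
}
```

Even at 10kHz against a 44.1kHz rate the error of nr1 comes out well under a cent, and it shrinks very quickly with oversampling.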
Ichad.c - Fri May 18, 2012 6:30 am
mystran wrote:
it doesn't really matter how accurately we recycle garbage (not in DSP anyway).

This reminds me a bit of Dominique Wurtz's (aka karrikuh) diode ladder filter, which was in part inspired by mystran's comments (also quite cheap - only 1 non-linearity though) - though he hints at solving the non-linear feedback too in the thread.

http://www.kvraudio.com/forum/viewtopic.php?p=3794095#3794095

Andrew
mystran - Fri May 18, 2012 9:35 am
mystran wrote:
it doesn't really matter how accurately we recycle garbage (not in DSP anyway).

This reminds me a bit of Dominique Wurtz's (aka karrikuh) diode ladder filter, which was in part inspired by mystran's comments (also quite cheap - only 1 non-linearity though) - though he hints at solving the non-linear feedback too in the thread.

http://www.kvraudio.com/forum/viewtopic.php?p=3794095#3794095

Dolphin has used a simple solver for the full five non-linearities since the beginning. Essentially it uses a predictor-corrector based on Euler, applied simultaneously to all the equations. That was the best I could think of at the time (I tried quite a few weird things), and it sorta-kinda works, but not quite as well as I'd like.

When I applied the method I'm describing here (a few days ago), the results were so much better it's not even funny. Compared to the already expensive method in Dolphin, I could even save on CPU (if I wanted to) and still improve the sound. I'll certainly be replacing the old filter in Dolphin as soon as I can sort out a model with all the auxiliary circuitry that Dolphin attempts to model.

However, if you do go transforming a diode ladder, don't get scared if the linear solution part isn't quite as pretty (or simple) as with transistor ladders. Being unbuffered, every output will depend on every state, and the solutions don't really factor very nicely at all, so the cost of the linear solver is certainly higher there.

For messy systems like this, I suggest the following strategy for finding an evaluation order: solve the group of equations and take whichever output has the shortest solution. Then remove the equation you solved, redo the solve for the remaining outputs (with the already solved value now available), and pick the shortest one again, until the remaining solutions are simple enough that it doesn't matter anymore.
Ichad.c - Fri May 18, 2012 10:29 am
mystran wrote:

edit: I like your approach though.. Pade approximations are generally what I always try first whenever I need to approximate something nasty.

I'm way into approximating stuff, it's just a shame that there aren't enough free programs that can approximate functions - all I know of is Maxima and lolremez. Since I'm a C++ noob (2 months) I've been steadily building a personal approximation library - but I'm going off topic.

Anyway, Padé approximations are actually the best IMHO in these situations, because the error shrinks the closer you get to zero: Padé is based on Taylor approximations, and the "centred around zero" case is a special one in math, often called a Maclaurin series. In other words, better tracking for low frequencies, so the more you oversample, the more accurate it becomes. Though the nr2 approximation is pretty dang good @48kHz - without oversampling.

Andrew
mystran - Fri May 18, 2012 10:55 am

Ichad.c wrote:
I'm way into approximating stuff, it's just a shame that there aren't enough free programs that can approximate functions - all I know of is Maxima and lolremez. Since I'm a C++ noob (2 months) I've been steadily building a personal approximation library - but I'm going off topic.

Maxima is what I use for almost all my (algebraic) math. It has its limits (eg you can sometimes get solutions out of Wolfram Alpha that Maxima doesn't know about), but most of the time I've been able to convince it to give me what I want. For numerical solutions you could use Octave (though learning that thing is still on my list of things to do, since I usually want the algorithm in C++ anyway, so I'll have to go the hard route regardless).
Z1202 - Fri May 18, 2012 11:14 am
Ichad.c wrote:
Anyway, Padé approximations are actually the best IMHO in these situations, because the error shrinks the closer you get to zero
You can get the same kind of behavior by approximating f(x)/x instead of f(x) with a minimax approximation. However, the problem with minimax approximations is that, although they produce the least possible maximum error, they also produce ripples. So for low-order approximations this might not be the best idea. For approximations where the error becomes negligible it's probably the best option. Having said that, it's not too easy to build a minimax rational approximation for tanh, because of convergence problems.
Ichad.c - Fri May 18, 2012 11:23 am
On a non-cheap note and since I'm fascinated by non-linearities in general:

Since 1 diode is exp(), all these technical papers simplify to tanh(). But in reality - aren't those diodes slightly mismatched? Which would lead to asymmetry. I saw a couple of FFT analyser pics of filters in self-oscillation - and if I remember correctly - all of them had some even harmonics - though that isn't proof in itself - 'cause there could be multiple factors in the circuits that could cause it.

So, maybe as an educated guess, do you think the diodes could lead to even harmonics?

Regards
Andrew
mystran - Fri May 18, 2012 11:47 am
Btw, if someone couldn't figure out how to go from the original post to the transistor ladder code I posted, here's a walk through that should clarify the process (process is pretty similar whatever the topology):

1. write down the differential equations for the system to be transformed (I'm not going to redo the derivation for transistor ladder this time; we'll just take the following for granted):

dV0/dt = iCtl * (tanh(in - r*V3) - tanh(V0))
dV1/dt = iCtl * (tanh(V0) - tanh(V1))
dV2/dt = iCtl * (tanh(V1) - tanh(V2))
dV3/dt = iCtl * (tanh(V2) - tanh(V3))

2. Replace each with BLT integrator and pull out the non-linearities

y0 = s0 + f * (t0 * (in - r*y3) - t1*y0)
y1 = s1 + f * (t1 * y0 - t2 * y1)
y2 = s2 + f * (t2 * y1 - t3 * y2)
y3 = s3 + f * (t3 * y2 - t4 * y3)

3. Solve each one separately to pull out the denominator factors; this step is optional and for filters like SVF that are based on integrators directly this doesn't even apply, but usually this simplifies things

y0 = (s0 + f * t0 * (in - r*y3)) / (1 + t1*f)
y1 = (s1 + f * t1 * y0) / (1 + t2*f)
y2 = (s2 + f * t2 * y1) / (1 + t3*f)
y3 = (s3 + f * t3 * y2) / (1 + t4*f)

Then substitute g0 = 1/(1 + t1*f) and so on, because they will appear as common subexpressions in the next step (and it's easier to pull them out in advance).

4. Fire up your favourite Computer Algebra System, and solve the whole thing. For Maxima as an example:

declare([s0,s1,s2,s3,in], mainvar);
solve([
y0 = (s0 + f * t0 * (in - r*y3)) * g0,
y1 = (s1 + f * t1 * y0) * g1,
y2 = (s2 + f * t2 * y1) * g2,
y3 = (s3 + f * t3 * y2) * g3
], [y0, y1, y2, y3]);

Take one of the solutions (I'm using y3) and write it down in code. In most cases it doesn't really matter which one you pick, so whichever looks the least evil is usually a good candidate, as long as it appears in the right-hand side of at least one other equation, so we can hope to simplify the remaining set of equations. Naturally you can pull out more than one if they are easy to calculate. Whatever seems to result in the smallest amount of total work.

Since you probably want to combine common sub-expressions as much as possible, this tends to be the part where you need to do some manual work, and it's also the most error-prone part. If you're sure the differential equations from step 1 are fine and the result doesn't work, blows up, or gives obvious garbage, then check and double-check that you didn't make any mistakes here. If everything else fails, try the solution in the exact form that you get from your CAS; often that will still work, and then you know the mistake is in your refactoring.

Even if it does appear to work, I'd still check and double-check everything (all steps, really) carefully, because slight mistakes with the "t" factors might not be very obvious in "mostly linear" behavior (ie the filter just distorts wrong), and mistakes with the "g" factors only become obvious at higher cutoffs. I usually copy the refactored C++ solution back into Maxima, subtract the original solution "as-is" (eg copy-paste) and feed it all to radcan(), which is Maxima's simplification function; it should print zero. If it doesn't, then the two formulas are not equivalent and you have a mistake (unfortunately not even this method is fool-proof, so don't get mad if you find a mistake in my code).

Now depending on the filter, you could then remove the solved output from the set of equations above, and solve the remaining ones again (with one of the outputs now available as constants). This should result in shorter solutions for the remaining equations. Repeat as many times as seems necessary. In the case of many "properly buffered" filters (like our transistor ladder here) that's not really necessary, since once we know one of the outputs, we can use the formulas from step 3 directly for the rest... but that's what you'd do for the diode-ladder or similar.
mystran - Fri May 18, 2012 12:02 pm

Ichad.c wrote:
Since 1 diode is exp(), all these technical papers simplify to tanh(). But in reality - aren't those diodes slightly mismatched? Which would lead to asymmetry. I saw a couple of FFT analyser pics of filters in self-oscillation - and if I remember correctly - all of them had some even harmonics - though that isn't proof in itself - 'cause there could be multiple factors in the circuits that could cause it.

Yup, that's right. Another thing left out from the example for simplification reasons and because I don't want people to copy my flavor of imperfections exactly.

What you get for mismatched transistors (or diodes) depends on how they are mismatched, and this can vary with temperature (since they probably have slightly different temperature drift and so on), but the most obvious result tends to be some CV feed-through (eg the control current doesn't cancel out completely). If you model this for ladders, you will typically get somewhat more noisy and/or rough sound (more so for diode ladder) but remember that if you do this you almost certainly should also model any DC blockers in the feedback loops because the two tend to interact in curious ways (and you might get DC problems too).
karrikuh - Fri May 18, 2012 12:19 pm
Hey mystran, thanks so much for sharing this! Your technique looks like it provides a very good sound quality / cpu usage ratio.

So I wrote some code for a 4-pole OTA cascade based on your difference equation and using your approach (EDIT: corrected non-linear gain calc):
inline double tick(const double x, const double fc)
{
// range [0..1] => 0 HZ .. Nyquist
assert(fc > 0 && fc < 1);
const double wc = PI_HALF * fc; // with 2x oversampling, frequency dewarping is not required
// const double wcb = 2 * tan(0.5*wc); // dewarping

// evaluate the non-linear gains
const double kk = k*tanhx_div_x(s[3]);
const double t1 = tanhx_div_x(x - kk*s[3] - s[0]);
const double t2 = tanhx_div_x(s[0] - s[1]);
const double t3 = tanhx_div_x(s[1] - s[2]);
const double t4 = tanhx_div_x(s[2] - s[3]);

// Linearize around operating point given by current filter state
const double a1 = wc*t1;
const double a2 = wc*t2;
const double a3 = wc*t3;
const double a4 = wc*t4;
const double b1 = 1 / (1 + a1), b2 = 1 / (1 + a2);
const double b3 = 1 / (1 + a3), b4 = 1 / (1 + a4);

// solve feedback
double ss = b1*s[0];
ss = b2*(a2*ss + s[1]);
ss = b3*(a3*ss + s[2]);
ss = b4*(a4*ss + s[3]);
const double g = a1*b1 * a2*b2 * a3*b3 * a4*b4;
const double y4 = (g*x + ss) / (1 + g*kk);

// update filter state
const double y0 = x - kk*y4;
const double y1 = b1*(a1*y0 + s[0]);
s[0] += 2*a1*(y0 - y1);
const double y2 = b2*(a2*y1 + s[1]);
s[1] += 2*a2*(y1 - y2);
const double y3 = b3*(a3*y2 + s[2]);
s[2] += 2*a3*(y2 - y3);
s[3] += 2*a4*(y3 - y4);

return y4;
}

The model includes a clipper in the feedback part because it would blow up otherwise (I am wondering why this is necessary since we already have clippers in each OTA stage, which should bound their output values, shouldn't they?). It works well with 2x oversampling and k <= 4.3 (EDIT: after correction, code works also with higher k). For larger k and very high cutoff, it can still blow up, but a maximum k=4.3 is perfectly fine. Also, I found that in practice, the 1/2 sample delay of the input does not yield an audible change in sound/behavior so I omitted it.

So, mystran (or anybody else), could you verify if this is roughly modelling the OTA cascade correctly? I'm asking because the code sometimes produces strange sounds (not necessarily bad!) when the filter is driven strongly and the resonance frequency is close to some harmonic of the input signal (a disharmonic sound like from ringmod or FM). Is this behavior found in analog circuits? I guess this could be the kind of "self-modulation" you were talking about in other threads. Increasing oversampling by another factor of 2 doesn't change this behavior.
Ichad.c - Fri May 18, 2012 12:54 pm
mystran wrote:

Yup, that's right. Another thing left out from the example for simplification reasons and because I don't want people to copy my flavor of imperfections exactly.

Imperfections is what makes things beautiful in my opinion, I tend to think about the imperfections part of anything before anything else. The part that intrigued me most in your code was the "tanh(sqrt(x))/sqrt(x)" part.

I've always had this simple theory - the human brain is naturally wired for pattern recognition - say you stand on a black & white tiled floor in a checkers pattern - you'll see either white on black or black on white; your brain is always analyzing. Now if you slightly mess with the checkers pattern on the floor (make it random, add colour etc), your brain can't "see" a pattern - so it concentrates less on it, seeing it as natural or "not a threat". The black and white checkered floor thing is quite strange - you'll often see this behaviour in kids -> only walk on white/black.

I think the most important part in audio, be it analog or digital equipment or even song-writing is to slightly -> break the pattern...

Regards
Andrew
mystran - Fri May 18, 2012 1:20 pm
karrikuh wrote:

The model includes a clipper in the feedback part because it would blow up otherwise (I am wondering why this is necessary since we already have clippers in each OTA stage, which should bound their output values, shouldn't they?).

No, they don't. The non-linearities act on the input to the integrator (see below).

In practice you would normally have another OTA (for resonance CV) and/or a feedback limiter in the feedback path, eg you can take the feedback through another tanh() or something. You might additionally want to bias that such that it clips somewhat earlier than the filter core, and the level of this bias is actually quite important as far as sound goes (and I have ABSOLUTELY NO IDEA what the correct bias would be for any particular filter; either measure yourself, ask someone else, or just try something until it sounds nice).

Quote:

So, mystran (or anybody else), could you verify if this is roughly modelling the OTA cascade correctly?

If I'm not mistaken (my electronics is not that strong), we should have roughly the following (assuming unity gain from each stage; I think that's wrong for CEM3320-style filters, since they specify 91k for input and 100k for feedback, which implies some voltage gain.. then again tanh() is probably bullshit for those gain-cells too, since they claim most distortion is second order.. but whatever):

dV0/dt = f * tanh( in - r * limit(V3) - v0)
dV1/dt = f * tanh( v0 - v1 )
dV2/dt = f * tanh( v1 - v2 )
dV3/dt = f * tanh( v2 - v3 )

Notice that each OTA feeds a (loss-less) integrator (= capacitor + buffer), which is fed back to the OTA input to get the "lossy" low-pass response. Hence the input to the saturation is the difference (Vin - Vout), which doesn't limit amplitude at all; it's really just a slew-rate limiter. In practice the amplitude is of course limited, because at some point the OTA will hit its supply rails, which is ignored by our model; there's no way a practical OTA can supply current that requires the voltage to exceed the supply, so the maximum capacitor voltage can't get any higher either.. but since you probably wouldn't want to zener the inputs of your OTA either (I think that's what happens if you go past the specified limits on input levels), you will need a feedback limiter anyway, and we might(!) be able to ignore the whole supply-rail issue.

In any case, I don't think you got it quite right, since you're ignoring the (local) feedback when calculating the non-linearities. The rest of it looks reasonable. [EDIT: oh wait, I'm not quite sure what's going on.. are you using just the feedback for the non-linearities? That's not a good idea; that's going to blow up!]
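For anyone who wants to experiment with the quoted model, here is a minimal sketch of the cascade ODEs discretized with plain forward Euler. Note this is deliberately NOT the zero-delay/linearization technique this thread is about, just the naive structure, and using tanh() for limit() is an assumption:

```cpp
#include <cmath>

// Naive forward-Euler discretization of the OTA cascade ODEs above.
// NOT mystran's zero-delay scheme -- just the structure of the equations,
// assuming tanh() as the feedback limiter and f << 1 (ie heavily oversampled).
struct OtaCascade {
    double v[4] = {0, 0, 0, 0};  // capacitor voltages V0..V3
    double f, r;                 // cutoff coefficient and resonance amount
    OtaCascade(double f_, double r_) : f(f_), r(r_) {}
    static double limit(double x) { return std::tanh(x); } // assumed limiter
    double tick(double in) {
        // evaluate all derivatives from the previous state, then update
        double d0 = f * std::tanh(in - r * limit(v[3]) - v[0]);
        double d1 = f * std::tanh(v[0] - v[1]);
        double d2 = f * std::tanh(v[1] - v[2]);
        double d3 = f * std::tanh(v[2] - v[3]);
        v[0] += d0; v[1] += d1; v[2] += d2; v[3] += d3;
        return v[3];
    }
};
```

With r = 0 and a DC input each stage settles to its input, so the whole cascade converges to the input value; with explicit Euler you need f well below 1 for stability.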
mystran - Fri May 18, 2012 1:32 pm
mystran wrote:

Yup, that's right. Another thing left out from the example for simplification reasons and because I don't want people to copy my flavor of imperfections exactly.

Imperfections are what make things beautiful, in my opinion; I tend to think about the imperfections part of anything before anything else.

Yes, in my opinion imperfections are where real "circuit modelling" starts, but that's orthogonal to the discussion of the technique being used.

Quote:

The part that intrigued me most in your code was the "tanh(sqrt(x))/sqrt(x)" part.

But it's perfectly logical. The line of thinking (or how I arrived there) is roughly this: to link a stereo compressor, you force the gain reduction to be the same for both channels. To make it independent of equal-power panning, you use the vector magnitude. Since saturation is just a form of compression, it makes sense that you can "stereo link" any saturation too. I originally did it because I hate the "distortion localizes in the speakers" effect of dual-mono processing.

However, the sqrt(x) is expensive, so you might want to approximate, which raises the question: why approximate something that you're just going to feed into another approximation? So it's logical to approximate the whole thing as a single step. Once I'd done that, I realized that (1) I had now eliminated any branches (or absolute values) required by many direct tanh() approximations, since the input (being a square) is always positive, and (2) it's actually a fairly good approximation (in terms of CPU vs accuracy) even for "mono" inputs, unless you have to handle huge feedback gains.
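Before any approximation, the stereo-linked saturation described above might look like this (a sketch; the function name is mine, and a real implementation would replace tanh(m)/m with a single rational approximation in m^2 = l*l + r*r, avoiding the sqrt entirely):

```cpp
#include <cmath>

// Stereo-linked tanh saturation: one gain factor computed from the vector
// magnitude of the stereo pair, applied equally to both channels, so the
// distortion doesn't localize in one speaker.
inline void saturateStereo(double &l, double &r) {
    double x = l * l + r * r;  // squared magnitude; always >= 0, so no fabs needed
    double m = std::sqrt(x);
    double g = (m > 1e-12) ? std::tanh(m) / m : 1.0; // tanh(m)/m -> 1 as m -> 0
    l *= g;
    r *= g;
}
```

For a mono signal (r = 0) this reduces to plain tanh(l), and the left/right ratio is always preserved.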

The above line of thinking wasn't something that happened overnight. The total time-line probably looks more like 2 or 3 years.

Quote:

I think the most important part in audio, be it analog or digital equipment or even song-writing, is to slightly break the pattern...

I agree.
karrikuh - Fri May 18, 2012 2:27 pm
Thanks mystran so far for the prompt reply.
mystran wrote:
dV0/dt = f * tanh( in - r * limit(V3) - v0)
dV1/dt = f * tanh( v0 - v1 )
dV2/dt = f * tanh( v1 - v2 )
dV3/dt = f * tanh( v2 - v3 )

Yes, these are exactly the differential equations I had in mind!
mystran wrote:

In any case, I don't think you got it quite right, since you're ignoring the (local) feedback when calculating the non-linearities. Rest of it looks reasonable.

Ah, thanks, I edited the code in my above post, so this is hopefully correct now? Now it doesn't blow up with very high k anymore!
Still, I'm wondering why this filter sounds rather dirty and produces these strange artifacts not present in the ladder's output?
mystran - Fri May 18, 2012 2:40 pm
Regarding the CEM3320 datasheet's 91k vs 100k: if my limited EE understanding doesn't fail me, and we assume the stages are true low-pass stages, then one would expect a voltage gain of 100k/91k ~ 1.1 per stage, which results in a total gain of 1.458 or so over the four stages, eg (still assuming tanh()):

dV0/dt = f * tanh( 1.1 * in - r * limit(V3) - v0)
dV1/dt = f * tanh( 1.1 * v0 - v1 )
dV2/dt = f * tanh( 1.1 * v1 - v2 )
dV3/dt = f * tanh( 1.1 * v2 - v3 )

The resonance VCA is still a question mark. If it's safe to assume it's a simple OTA, and that the input resistor to ground (3.5k) is also what the other gain cells have (or had, if they were OTAs), then we'd have roughly a gain of 1.96 ~ 2 from the 51k resistor from the audio out. Maybe one could calculate that from the values given in the datasheet, but I'm too tired to figure out an obvious way right now. Anyway, that assumption would give:

r' = r / (1.458 * 1.96) ~ r / ( 2.86 )

dV0/dt = f * tanh( 1.1 * in - r' * tanh(1.96 * V3) - v0)
dV1/dt = f * tanh( 1.1 * v0 - v1 )
dV2/dt = f * tanh( 1.1 * v1 - v2 )
dV3/dt = f * tanh( 1.1 * v2 - v3 )

That sounds quite reasonable actually (with resonance clipping before the stages start going foobar). Unfortunately I have no such filter to measure against..
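The gain bookkeeping above is easy to sanity-check numerically (the resistor-derived numbers are guesses read off the datasheet, as noted in the text):

```cpp
#include <cmath>

// Sanity check of the assumed CEM3320 gain structure discussed above.
const double kStageGain = 100.0 / 91.0;                  // ~1.1 voltage gain per stage
const double kTotalGain = std::pow(kStageGain, 4.0);     // ~1.458 across four stages
const double kVcaGain   = 1.96;                          // assumed resonance VCA gain
const double kResScale  = 1.0 / (kTotalGain * kVcaGain); // r' = kResScale * r ~ r / 2.86
```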

edit: sound sample for the above (if I didn't make any mistakes) http://www.signaldust.com/files/cascade.mp3
(mp3 but high bitrate.. oh and 44.1kHz host rate with x4 oversampling)

I think that's not too bad, whether or not it models anything
mystran - Fri May 18, 2012 2:41 pm
karrikuh wrote:

Still, I'm wondering why this filter sounds rather dirty and produces these strange artifacts not present in the ladder's output?

I don't know. Try some synth that has such a filter and you might observe similar results.

PS. if you don't observe similar results, then our model is inaccurate.
Ichad.c - Sat May 19, 2012 11:16 am
mystran wrote:
then again tanh() is probably bullshit for those gain-cells too, since they claim most distortion is second order..

Hey Karrikuh, the above is something to look into; the "dirty" part you describe might just be too much distortion, since you should remember that tanh(x) has a "boosting" effect - and if you are boosting successively, the boosts drive higher into the next non-linearity's range (hope that makes sense). I think the people that used OTAs were actually trying to get a cleaner sound and less noise (in theory they should be less noisy), so I don't think they are "driven" that hard. Here are some interesting diagrams which might be of some use as well:

http://vellocet.com/dsp/analog/SelfOscHarmonics.html

Notice all them even harmonics

Also if you haven't read (which I doubt) Timothy E. Stinchcombe's excellent Study of the MS10 & MS20 Filters - make a cup of coffee and get cracking! My head still hurts every time I read and re-read it

Regards
Andrew
Ichad.c - Sat May 19, 2012 11:21 am
Hey mystran, a random Q out of curiosity:

Do you think your approach/methods for zero-delay filters could be applied to the Sallen-Key topology, and, dare I say, the scary leapfrog SV?

Regards
Andrew
Ichad.c - Sat May 19, 2012 12:26 pm
Oh and another (rather long) thread to look at:

http://music.columbia.edu/pipermail/music-dsp/2011-May/069900.html

Andrew
karrikuh - Sat May 19, 2012 1:02 pm
mystran wrote:
then again tanh() is probably bullshit for those gain-cells too, since they claim most distortion is second order..

Hey Karrikuh, the above is something to look into; the "dirty" part you describe might just be too much distortion, since you should remember that tanh(x) has a "boosting" effect - and if you are boosting successively, the boosts drive higher into the next non-linearity's range (hope that makes sense). I think the people that used OTAs were actually trying to get a cleaner sound and less noise (in theory they should be less noisy), so I don't think they are "driven" that hard.

Yeah, I also suspect I'm just driving the filter unrealistically hard. So, after all, could it be that the non-linearities within each OTA stage can actually be neglected in practice, and the character of the filter is essentially determined by the feedback clipper? At least this is what I figured when comparing a simplified cascade (linear 1-pole stages, only one clipper in the feedback path) to the OTA cascade emulation in Diva (unfortunately I do not yet own the real thing): I couldn't really hear a difference...
mystran - Sat May 19, 2012 1:17 pm
I had to buy a new mouse (the cord started failing on the old one). Due to a temporary brain-malfunction I ended up buying a Tetra too; they really REALLY should make it illegal to sell synths and computer parts in the same stores.

So with my new (rather expensive) mouse, I might be able to do more educated guesses at some point.
mystran - Sat May 19, 2012 1:20 pm

Do you think your approach/methods for zero-delay filters could be applied to the Sallen-Key topology, and, dare I say, the scary leapfrog SV?

If you can write down the differential equations (which should be straightforward for linear models) then you can certainly apply this stuff to whatever you want. The question is how well it works.

Somehow (it might have something to do with the new mouse) I've also come into possession of a Monotron (those are practically free, so I couldn't resist), so I might try and see if I can mimic that filter at some point.
mystran - Sat May 19, 2012 1:34 pm
karrikuh wrote:

Yeah, I also suspect I'm just driving the filter unrealistically hard. So, after all, could it be that the non-linearities within each OTA stage can actually be neglected in practice, and the character of the filter is essentially determined by the feedback clipper?

My take on this would be: Please forget this idea immediately.
jupiter8 - Sun May 20, 2012 12:20 am
Mystran's cheap non-linear zero-delay filter, soon in every synth.
Brilliant, thanks.
Richard_Synapse - Sun May 20, 2012 12:31 am
From what I can tell from a real OTA ladder filter, the nonlinearity is not tanh. For the diode clipper it's most likely the wrong model to choose as well. Of course it may sound perfectly fine, just saying unlikely to be 100% authentic if that's what you're after.

Richard
karrikuh - Sun May 20, 2012 1:25 am
jupiter8 wrote:
Mystran's cheap non-linear zero-delay filter, soon in every synth.
Brilliant, thanks.

I totally saw this coming. Of course it's perfectly fine as long as devs don't start marketing it as something exclusive to their product.

Richard_Synapse wrote:
From what I can tell from a real OTA ladder filter, the nonlinearity is not tanh. For the diode clipper it's most likely the wrong model to choose as well. Of course it may sound perfectly fine, just saying unlikely to be 100% authentic if that's what you're after.

Richard

I'm more of the school of "if it sounds good, it's good", so I don't really care if the nonlinearity is perfectly reproduced. From my experience, it's more the placement of the clipper within the topology rather than its particular shape that determines the overall character.
mystran - Sun May 20, 2012 5:11 am
Richard_Synapse wrote:
From what I can tell from a real OTA ladder filter, the nonlinearity is not tanh. For the diode clipper it's most likely the wrong model to choose as well. Of course it may sound perfectly fine, just saying unlikely to be 100% authentic if that's what you're after.

Like I tried to point out (perhaps not explicitly enough) it depends on the OTA.

For something like the CA3080 (or even the LM13700, as long as you leave the diode linearization unconnected) tanh() is quite a reasonable model, since the whole thing is just another long-tailed pair plus a few current mirrors. So if you built an OTA ladder with 4x CA3080 for the stages and another for the resonance control, then you should get roughly tanh() all the way.

Of course, most OTA cascades in the wild are not built out of CA3080s but out of something like the CEM3320, so the question of "what are the correct non-linearities" then becomes "how are the CEM3320 gain cells implemented".
mystran - Sun May 20, 2012 5:11 am
jupiter8 wrote:
Mystran's cheap non-linear zero-delay filter, soon in every synth.

Well, hopefully not. Hopefully "mystran's cheap non-linear zero-delay filters" in every synth that doesn't have anything better already.
izonin - Sun May 20, 2012 6:11 am
Is this one based on Vadim's 0df filter, or is it a new concept?
mystran - Sun May 20, 2012 6:32 am
Regarding CEM3320:

If someone has one lying around, you could take one of the stages, replace the capacitor with a resistor (to convert the current to a voltage rather than integrate it), then feed it a saw-tooth (with some sane control current and the rest of the chip connected in some safe bypass configuration) and measure the voltage over the resistor (with a scope). That would give you the memory-less non-linear transfer curve of the gain cell (assuming your saw-tooth is linear). This relies on neglecting the buffer input current (or at least assuming it's linear), but that's probably perfectly fine.

I'm not an EE so (1) I'm intentionally making the above a bit vague, because you should know what you are doing and (2) please don't blame me if you get magic smoke. If the chip is already part of a filter, you'll need to disconnect it first (this might involve desoldering if it's not in a socket).

Similarly you could then measure from the stage-output (eg what the buffer gives) to get an idea of whether the buffers themselves are non-linear (in the previous discussion we've been assuming the buffers are linear enough to neglect).

For measuring the feedback amplifier, you could either measure after the first stage and apply an inverse mapping to remove the effect of the first stage's non-linearity, or you could try to measure from pin 1 directly.

Also, the SSM2044 is something else entirely (ie based on the datasheet I wouldn't really call it an OTA cascade); for all I know it might even behave more like a transistor ladder.
mystran - Sun May 20, 2012 6:35 am
mystran wrote:

r' = r / (1.458 * 1.96) ~ r / ( 2.86 )

dV0/dt = f * tanh( 1.1 * in - r' * tanh(1.96 * V3) - v0)
dV1/dt = f * tanh( 1.1 * v0 - v1 )
dV2/dt = f * tanh( 1.1 * v1 - v2 )
dV3/dt = f * tanh( 1.1 * v2 - v3 )

[...]

edit: sound sample for the above (if I didn't make any mistakes) http://www.signaldust.com/files/cascade.mp3
(mp3 but high bitrate.. oh and 44.1kHz host rate with x4 oversampling)

Since I edited the sound sample into the post afterwards, and we transitioned to the next page, someone might not have noticed it (and nobody commented): do any of you like that sound? (And apologies for the crappy playing.)

Karrikuh specifically: are you getting similar results to the above?
mystran - Sun May 20, 2012 6:54 am
mystran wrote:
jupiter8 wrote:
Mystran's cheap non-linear zero-delay filter, soon in every synth.

Well, hopefully not. Hopefully "mystran's cheap non-linear zero-delay filters" in every synth that doesn't have anything better already.

More seriously though: even if everyone started using something like this, I doubt it'll lead to an "every filter sounds the same" problem any more than existing practice has. You can already hear tons of simple synths using the same few old models, yet quality synths still manage to have their own sound.

For what it's worth, I have no intention of using the ladder from the second post as such. It's a simplified model, with details stripped that (1) I think are important for the transistor ladder sound but (2) contribute nothing to the discussion here.
mystran - Sun May 20, 2012 7:56 am
izonin wrote:
Is this one based on Vadim's 0df filter, or is it a new concept?

I think the "credit" chain goes something like:

Vadim wrote the paper (is this the correct one to reference?), which actually does pretty much the same as I'm doing; all I'm really doing is throwing away the iterations. I think this showed everyone that there's something important in here.

Neotec started an insightful thread about this stuff last year. I'd like to credit him for the insightful use of the TDF2 integrator, which turns out to be a wonderful choice. In that thread we discuss shaping of the state variables, and if you rewrite the state updates the way I do, this gives another simple method (in fact I started out just improving that method).

Also, Urs (or rather U-he) released Diva, which made everyone (including me) realize that we really should forget about the old fudge-factor models. I should certainly credit Andrew for the Drop too, even if that's not available yet (some of the samples he has posted certainly sound a lot better than anything I've done). Without these two guys I'd probably never have gotten interested in the whole thing (since I had some reasonably well-working "traditional" filters).

Now, all I did was figure out a simple simplification (somewhat by accident). It was actually only supposed to be a "prediction" step for an iterative solver. Only after I tried to iteratively improve the solution (without getting much audible benefit out of it) did I start analyzing what was going on, and I realized I'd found a prediction that (at least for the simple cases we've discussed so far) was good enough to use directly.

Basically: Credit to Vadim, Urs, Andy, neotec (what's his real name?) and everyone else that has contributed to the various discussions of zero-delay filters in the past few years. My contribution is rather small; I'm standing on the shoulders of giants.
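For reference, the TDF2 trapezoidal integrator credited above (the building block quoted in the first post: y[n] = s[n] + f*x[n], s[n+1] = s[n] + 2*f*x[n]) can be sketched as:

```cpp
// TDF2 (transposed direct form II) trapezoidal integrator from the first
// post: y[n] = s[n] + f*x[n], s[n+1] = s[n] + 2*f*x[n],
// with f = tan(pi * freq / samplerate).
struct Tdf2Integrator {
    double s = 0;  // integrator state
    double f;      // frequency coefficient
    explicit Tdf2Integrator(double f_) : f(f_) {}
    double tick(double x) {
        double y = s + f * x;  // output
        s = y + f * x;         // equivalent to s += 2*f*x
        return y;
    }
};
```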
izonin - Sun May 20, 2012 8:11 am
mystran wrote:
izonin wrote:
Is this one based on Vadim's 0df filter, or is it a new concept?

I think the "credit" chain goes something like:

[...]

The reason I'm asking is that I have Vadim's 0df filter as a Reaktor ensemble, and since it's similar, I could try and adapt it to this new model.
As far as crediting goes, I think UAD's Moog Multimode also deserves to be mentioned, as I believe it inspired both Urs and Andy.
karrikuh - Sun May 20, 2012 11:13 am
mystran wrote:
[...]

Since I edited the sound sample into the post afterwards and we transitioned to the next page so someone might no have noticed it (and nobody commented), does any of you like that sound? (and apologies for crappy playing)

I think it sounds quite nice although I can't tell how authentic it is.
mystran wrote:
Karrikuh specifically: are you getting similar results to the above?

Not sure, but maybe you can judge for yourself, I created a couple of test sounds:

It contains a slow resonance sweep over a pure sawtooth wave, with different pitches and filter input gains. The artifacts I mentioned earlier, which were slightly irritating to me, show up with the high-pitched input (note C5). Here one notices the strong emphasis and distortion when the resonance sweeps over a harmonic of the input, so that the overall sweep is not quite smooth.

Furthermore, there are some standard sounds, all with the resonance set relatively high. There's also a comparison of your full OTA implementation with 5 tanhs against a simplified one with only feedback clipping. Finally, I included a sawtooth sequence running through a 2-pole LPF based on your approach. I think the latter has a resonance characteristic extremely similar to Diva's Sallen-Key filter.

EDIT: everything @ 2x oversampling
AdmiralQuality - Sun May 20, 2012 11:34 am
mystran wrote:
Oh and AQ: if you use this, you owe me a Poly-Ana.

I will seriously take a look at it, thanks! And you're on, deal!
Ichad.c - Sun May 20, 2012 11:57 am
I read somewhere that sallen-key filters use positive feedback, is that true?
mystran - Sun May 20, 2012 1:05 pm
Hmmh, maybe you're right that the difference between the full model and the simplification is somewhat subtle. The reason is probably that for frequencies below the cutoff, the output (of a stage) tracks the input (of the same stage) fairly closely, so the difference will be small and the gain will be more or less linear. Intuitively that means the most obvious distortion (ie from higher frequencies) gets filtered out, and some cutoff modulation is likely the most obvious observable effect (there is no cutoff modulation whatsoever if you simplify down to a single clipper).

As for the C5 note and artifacts: here's what my new toy does (if I drop the resonance below self-oscillation it agrees better). With the oscillator disabled it's clean except for faint control noise. So compared to that, I'd rather say the model sounds too nice, though it's quite possible the vintage chips were smoother, and/or the VCA is doing something evil, or tanh() is a poor model for the gain cells and/or the feedback limit (I suspect this might be the case), or any number of other things. That said, I'm probably going to keep the model (I like it, and I'm more into nice-sounding stuff than exact emulation), at least until I get my hands on some vintage chips (assuming that ever happens).

mystran - Sun May 20, 2012 1:20 pm
I read somewhere that sallen-key filters use positive feedback, is that true?

Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.
AdmiralQuality - Sun May 20, 2012 1:31 pm
mystran wrote:
I read somewhere that sallen-key filters use positive feedback, is that true?

Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.

Are you sure? Poly's is negative. As is my understanding of Moog and various other classic designs.
mystran - Sun May 20, 2012 1:43 pm
mystran wrote:
I read somewhere that sallen-key filters use positive feedback, is that true?

Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.

Are you sure? Poly's is negative. As is my understanding of Moog and various other classic designs.

I said "at least around the cutoff frequency", and that's the important part. For the ladder, the feedback happens to be negative at DC because the output at the cutoff is 180 degrees out of phase, so negative gain at DC is specifically how you make the feedback positive at the cutoff.

But the reason I said what I said is that I'm aware of these myths claiming the "positive feedback" in a Sallen-Key is somehow responsible for the sound of a particular filter. It's not; it's the non-linearities of a particular design that matter, and you can't reduce that to "positive" vs "negative"!

Nobody has complained about the fact that I've been writing down differential equations that are totally wrong as far as "positive" vs "negative" goes. All the OTA buffers and transistor ladder stages and whatnot are actually inverting, while I've been writing them as if they were not. The reason I've been doing that is that it doesn't matter as long as you are consistent (ie if you have an asymmetric non-linearity and you invert the signal going through it, then you have to invert the non-linearity too).
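The phase argument can be checked numerically: four identical one-pole low-passes contribute -45 degrees each at the (normalized) cutoff, so the loop comes back 180 degrees out of phase there. A sketch:

```cpp
#include <complex>
#include <cmath>

// Phase of H(s) = 1/(1+s)^4 at the normalized cutoff s = j*1.
// Each pole contributes -45 degrees there, -180 total, which is why
// feedback that is negative at DC becomes positive at the cutoff.
double cascadePhaseAtCutoffDeg() {
    const double pi = std::acos(-1.0);
    std::complex<double> s(0.0, 1.0);       // s = j*w, with w = 1 (cutoff)
    std::complex<double> h = 1.0 / (1.0 + s);
    std::complex<double> H = h * h * h * h; // four identical poles
    return std::arg(H) * 180.0 / pi;        // phase in degrees
}
```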
AdmiralQuality - Sun May 20, 2012 1:49 pm
mystran wrote:
[...]

I said "at least around the cutoff frequency", and that's the important part. For the ladder, the feedback happens to be negative at DC because the output at the cutoff is 180 degrees out of phase, so negative gain at DC is specifically how you make the feedback positive at the cutoff.

Gotcha.
Quote:

[...]

Cool. I can't pretend to understand it all, but will let you know when I've had a chance to try it.
camsr - Sun May 20, 2012 2:09 pm
I have a question about delayless filters:
Is there some advantage to them as opposed to oversampling the same thing? Can oversampling work as well or does it have different problems? I could see the non-linearity working just as well, but what about the tuning and modulation?
xh3rv - Sun May 20, 2012 2:26 pm
camsr wrote:
I have a question about delayless filters:
Is there some advantage to them as opposed to oversampling the same thing? Can oversampling work as well or does it have different problems?

Not that I'm super confident in my understanding of things, but... eventually it boils down to the basic question of how you solve the underlying equations. You can divide and conquer the problem by oversampling, but with some mathematical transformation of the problem there's an opportunity to divide it in a more sophisticated way. Solutions that describe the infinitesimal intervals of oversampling can be engineered to an accuracy that's good enough for the task of producing rich audio signals. And it's possible to do this quickly enough for real-time performance on a PC - perhaps costly in terms of CPU, but I'd say there's a reliable consensus that it's still overwhelmingly worth it to get a certain kind of sound.
antto - Sun May 20, 2012 3:07 pm
as far as i understand it..
if we consider the speed of current, calculate some okay-ish time period for the shortest analog feedback loop, and use such a high sampling rate, i guess there will not be a need for doing fancy stuff like zero delay
i'm talking about probably some uber-giga-hyper insane sampling rate

BUT there comes the problem with rounding and low cutoff frequencies..

so, you cannot just oversample like mad..
aciddose - Sun May 20, 2012 3:10 pm
could you replace the averaging "half sample delay" with lerp() so it's clear to anyone reading the code what is happening?

have you tried more advanced interpolations?

template<class T, class F>
INL T lerp(const T &A, const T &B, const F &X)
{
return A + (B - A) * X;
}

template<class T, class F>
INL T lerpcos(const T &A, const T &B, const F &X)
{
return A + (B - A) * (((F)1.0 - cos(X * (F)3.14159)) * (F)0.5);
}

template<class T, class F>
INL T lerpparabola(const T &A, const T &B, const F &X)
{
return A + (B - A) * (X*X * ((F)2.0 - X*X));
}

template<class T> INL T lanczos(const T &v); // forward declaration; defined below (see "edit" at the end)

template<class T, class F>
INL T lerpsinc(const T &A, const T &B, const F &X)
{
return A + (B - A) * ((F)1.0 - lanczos<F>(X));
}

template<class T, class F>
INL T cubic(const T &A, const T &B, const T &C, const T &D, const F &X)
{
T E = B + D - C - A;
T Fc = A - B - E; // renamed: a local "F" would shadow the template parameter F
T G = C - A;
return B + E*X*X*X + Fc*X*X + G*X;
}

template<class T, class F>
INL T hermiteb(const T &A, const T &B, const T &C, const T &D, const F &X)
{
const T E = (B - C) * 1.5 + (D - A) * 0.5;
const T Fc = C*2.0 - B*2.5 + A - D*0.5; // renamed to avoid shadowing the template parameter F
const T G = (C - A) * 0.5;
return B + E*X*X*X + Fc*X*X + G*X;
}

template<class T, class F>
INL T intb(const T &A, const T &B, const T &C, const T &D, const F &f)
{
const T W = (C - B) * 3.0 + (A - D);
const T X = (B - C) * 7.5 + (D - A) * 2.5;
const T Y = (C + A) * 0.5 - B;
const T Z = (C - A) * 0.5;
return B + W*f*f*f*f*f + X*f*f*f*f + W*1.5*f*f*f + Y*f*f + Z*f;
}

//edit: forgot these...

template<class T>
INL T sinc(const T &X)
{
return sin(X) / (1E-10 + X);
}

template<class T>
INL T lanczos(const T &v)
{
return sinc<T>(v * (T)3.14159 / 2.0) * sinc<T>(v * (T)3.14159);
}

template<class T, class F>
INL T sinci(const T &A, const T &B, const T &C, const T &D, const F &X)
{
return (A)*lanczos(-1.0f - X) + (B)*lanczos(-X) + (C)*lanczos(1.0f-X) + (D)*lanczos(2.0f-X);
}

lerpcos is too expensive unless you use an approximation for cos. i haven't bothered to implement it myself yet but it's simple enough if anyone wanted to try.

lerp cos, parabolic and sinc have similar frequency response but different phase.
Urs - Sun May 20, 2012 3:14 pm
I read somewhere that sallen-key filters use positive feedback, is that true?

Kind of. But it goes into the other end of the first integrator of a two pole cascade...
aciddose - Sun May 20, 2012 3:20 pm
sallen-key is similar to this:

f_c = not sure, i'd have to check
fb_c = fb + fb / (1.0 - f_c);
a += (in - a + (a - saturate(b)) * fb_c) * f_c;
b += (a - b) * f_c;

that's pretty much exactly the structure, should work just fine using the method in this thread.

to get bandpass you insert the input where the saturate(b) is, so saturate(b) + input.

highpass you do the same only insert into the b integrator, so b += (a - b + b - input), i think. i'd have to test it. it's quite useless and sounds like crap with a naive implementation. it's also more expensive than a "state-variable" so i've never really bothered that much with it.
Richard_Synapse - Sun May 20, 2012 3:39 pm
camsr wrote:
I have a question about delayless filters:
Is there some advantage to them as opposed to oversampling the same thing? Can oversampling work as well or does it have different problems? I could see the non-linearity working just as well, but what about the tuning and modulation?

What do you mean by oversampling "the same thing"? If you mean oversampling a slightly more classic design, it's going to be slower than the code posted here, as you'll have twice the nonlinearities to solve, plus the up- and downsampling. You only gain CPU by staying at the base rate.

Richard
mystran - Sun May 20, 2012 4:16 pm
Richard_Synapse wrote:
camsr wrote:
I have a question about delayless filters:
Is there some advantage to them as opposed to oversampling the same thing? Can oversampling work as well or does it have different problems? I could see the non-linearity working just as well, but what about the tuning and modulation?

What do you mean with oversampling "the same thing"? If you mean oversampling a slightly more classic design, it's going to be slower than the code posted here, as you'll have twice the nonlinearities to solve plus the up- and downsampling. You can only gain CPU by using the same rate.

Yeah. What I find interesting is that as the "circuits" get more complex and have more "feedback" paths, the "zero-delay" solutions get somewhat more complex, but the oversampling and fine-tuning that you would need for a traditional filter just skyrockets.

Eg the diode ladder is already complex enough that it's just cheaper to calculate the "zero-delay" solution, even if that solution is horrible and costs almost as much as evaluating a naive filter twice (with non-linearities and all).

In fact, I think "zero-delay" is a misleading term, because it's not so much about "eliminating delays" as it is about obsoleting the whole directional signal flow of traditional DSP: you just solve a system of equations, and you can rely on instant signal propagation, just like in analog (or rather, unlike in analog, where you're limited to a fraction of the speed of light).

ps: no worries though, once we run out of everything else to model (and still have enough computer power to oversample until the naive solution works), we can always model the copper "transmission lines" on circuit boards; apparently at sufficiently high frequencies there's some interesting stuff going on there depending on the layout of the board
KBSoundSmith - Sun May 20, 2012 4:23 pm
mystran wrote:

ps: no worries though, once we run out of everything else to model, we can always model the copper "transmission lines" on circuit boards; apparently at sufficiently high frequencies there's some interesting stuff going on there depending on the layout of the board

Or start programming to do things only possible digitally?
mystran - Sun May 20, 2012 4:40 pm
KBSoundSmith wrote:
mystran wrote:

ps: no worries though, once we run out of everything else to model, we can always model the copper "transmission lines" on circuit boards; apparently at sufficiently high frequencies there's some interesting stuff going on there depending on the layout of the board

Or start programming to do things only possible digitally?

Haha, but I already suggested some of that in this thread, when I said I'm ruthlessly abandoning the analog concept of dual mono and treating stereo as vectors (too bad that only works with symmetric non-linearities).
camsr - Sun May 20, 2012 6:03 pm
Okay, I took a look using my DAW: I set up a sample-based delay and compared delays against the bandwidth of interest. If our bandwidth of interest is 24 kHz (48 kHz sampling rate), 4x oversampling would be required to feed a signal back once with a 1-sample delay in the loop. The passband had a -3 dBFS rolloff at Nyquist with a two-sample delay. Any more than 1 sample would result in comb filtering of the passband.

It seems the solution is based on the amount of feedback required. If it is treated as a unit with respect to time, oversampling looks plausible for some applications, where solving analytically might be more expensive.
camsr - Sun May 20, 2012 6:29 pm
It appears using delayed feedback, the amount of oversampling required is the number of loops of the feedback times 4. I can't think of many uses for this, so I graciously step aside.
mystran - Sun May 20, 2012 7:04 pm
Now, what I would personally want to know, is how to do Accurate Discretization of Analog Audio Filters with Application to Parametric Equalizer Design efficiently with non-linearities.
karrikuh - Sun May 20, 2012 9:54 pm
aciddose wrote:
could you replace the averaging "half sample delay" with lerp() so it's clear to anyone reading the code what is happening?

have you tried more advanced interpolations?

I would say don't waste your time on this (well, you already did ... ). I found one can safely omit the 1/2 sample delay in practice without any impact on tuning/sound. The filter code for which I posted sound samples above (with 2x oversampling) doesn't have it.
karrikuh - Sun May 20, 2012 10:03 pm
I would be very interested if there are developers of guitar amp sims/distortion circuit emulations reading this thread who could try out if mystran's method can improve accuracy/efficiency of their code and share their experience.
aciddose - Sun May 20, 2012 10:11 pm
karrikuh wrote:
aciddose wrote:
could you replace the averaging "half sample delay" with lerp() so it's clear to anyone reading the code what is happening?

have you tried more advanced interpolations?

I would say don't waste your time on this (well, you already did ... ). I found one can safely omit the 1/2 sample delay in practice without any impact on tuning/sound. The filter code for which I posted sound samples above (with 2x oversampling) doesn't have it.

it should definitely make a difference. not a very big difference, but it should be audible in certain configurations. mystran already noted that aliasing may be reduced due to the filtering of the input. there are various ways to produce more accurate delays as well, although i don't think the phase accuracy should be very important.

what would be an improvement would be the sharper cut of more advanced interpolations.

it might actually not be a bad idea to convolve with an impulse designed specifically to shape the frequency response of the non-linear calculation. i haven't done any experimentation so i couldn't say what would work. such an impulse could be designed to control phase and frequency response with very high accuracy.
AdmiralQuality - Sun May 20, 2012 10:13 pm
karrikuh wrote:
I would be very interested if there are developers of guitar amp sims/distortion circuit emulations reading this thread who could try out if mystran's method can improve accuracy/efficiency of their code and share their experience.

There isn't a lot of feedback in those circuits. Except for the Presence control. Oh, hmmmmmmm!
earlevel - Sun May 20, 2012 10:54 pm
karrikuh wrote:
I would be very interested if there are developers of guitar amp sims/distortion circuit emulations reading this thread who could try out if mystran's method can improve accuracy/efficiency of their code and share their experience.

I'm not totally sure what you're getting at, perhaps...

There are a number of filters involved, of course, but recognize that most amps being emulated have passive, first-order filters. And even when they're higher order, they're not highly resonant. And they aren't moving much to speak of. And the core of the amp-sim/distortion is oversampled anyway...

In general, the filters are the easy part—there's some interaction (because the passive circuits interact)...

Did you have something that you were getting at that I'm missing? Were you thinking about the filters and tone controls, or mainly thinking of the oversampling nature of amp sims?
Z1202 - Sun May 20, 2012 11:42 pm
karrikuh wrote:
I would be very interested if there are developers of guitar amp sims/distortion circuit emulations reading this thread who could try out if mystran's method can improve accuracy/efficiency of their code and share their experience.
As far as I can guess, the SimulAnalog guys were using the trapezoidal integration back in the beginning of this century.
Richard_Synapse - Mon May 21, 2012 1:56 am
karrikuh wrote:
I would be very interested if there are developers of guitar amp sims/distortion circuit emulations reading this thread who could try out if mystran's method can improve accuracy/efficiency of their code and share their experience.

Haven't tried but I'd expect it to blow up in some cases. E.g. a diode clipper circuit as found in stomp boxes has a sinh(x) nonlinearity as opposed to tanh(x):

https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf

Anyhow, there are solutions for such problems that are efficient and stable, especially if circuit elements are constant; then you could use a state-space representation, for example.

Richard
cheppner - Mon May 21, 2012 3:02 am
mystran,

that optimization of yours is genius.

Am I mistaken when I summarize it as "use linear zero-delay feedback filter technology but add delayed nonlinearities"?

Cheers,
- Clemens
karrikuh - Mon May 21, 2012 4:08 am
aciddose wrote:
karrikuh wrote:
aciddose wrote:
could you replace the averaging "half sample delay" with lerp() so it's clear to anyone reading the code what is happening?

have you tried more advanced interpolations?

I would say don't waste your time on this (well, you already did ... ). I found one can safely omit the 1/2 sample delay in practice without any impact on tuning/sound. The filter code for which I posted sound samples above (with 2x oversampling) doesn't have it.

it should definitely make a difference. not a very big difference, but it should be audible in certain configurations. mystran already noted that aliasing may be reduced due to the filtering of the input.

Sure, the lowpass effect of crude linear interpolation for realizing the 1/2 sample delay will slightly reduce aliasing, but that's more of an artifact with positive side effects and not the original idea behind introducing the delay. If you replace the linear interpolation by an ideal bandlimited sinc-interpolator, then the lowpass effect is gone...

But really, just try it out yourself, mystran provided a nice piece of code you can mess with.
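For what it's worth, the averaging aciddose asks about and the lerp() form are literally the same operation: averaging the previous and current values is linear interpolation evaluated at t = 0.5. A tiny sketch of that identity (names are mine, not from mystran's code):

```c
#include <assert.h>

/* The "half sample delay" under discussion, written two ways:
   a plain average, and lerp() at t = 0.5, which is identical. */
static double lerp(double a, double b, double t)
{
    return a + t * (b - a);
}

static double half_sample(double prev, double curr)
{
    return lerp(prev, curr, 0.5); /* same as 0.5 * (prev + curr) */
}
```

Swapping in a higher-order interpolator would change the lowpass side effect discussed above, which is exactly why the sinc-interpolator thought experiment removes it.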
mystran - Mon May 21, 2012 6:30 am
cheppner wrote:
mystran,

that optimization of yours is genius.

Am I mistaken when I summarize it as "use linear zero-delay feedback filter technology but add delayed nonlinearities"?

Yeah, pretty much. Delay the non-linearities by half a sample, which happens to be just enough to make them only depend on what we already know.

Then replace them with resistors when calculating the linear solution. It was this second step where I started getting good results.
mystran - Mon May 21, 2012 7:00 am
Richard_Synapse wrote:
karrikuh wrote:
I would be very interested if there are developers of guitar amp sims/distortion circuit emulations reading this thread who could try out if mystran's method can improve accuracy/efficiency of their code and share their experience.

Haven't tried but I'd expect it to blow up in some cases. E.g. a diode clipper circuit as found in stomp boxes has a sinh(x) nonlinearity as opposed to tanh(x):

https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf

Anyhow there's solutions for such problems that are efficient and stable, especially if circuit elements are constant- then you could use a state-space representation, for example.

Why would this blow up?

It's not like this method is limited to tanh() non-linearities. I first used the idea for the diode-limit of an SVF, and when I realized the error was well-defined, I generalized it to other things.

In the circuit above, the non-linearity isn't unbounded in the sense that would cause any problems here. At low inputs (and capacitor voltages) the diodes are open circuit, and it's a low-pass. At high inputs and/or capacitor voltages, the diodes become a "wire" and bypass the capacitor, at which point current is limited by the resistor (which is a stable circuit too).

The paper gives an approximation for the equivalent "resistance" for the diodes. That's what you want to use. Once you have a resistance value, you can calculate a linear solution using the method here. The problem is that diode current depends on resistor current which depends on diode current, which needs to be solved first. This is not something this method addresses.

The thing is, in circuits where there would truly be problems with run-away non-linearities, the practical analog prototype will typically do one of the following: (1) hit supply rails, which puts a bound on currents or (2) release magic smoke. Only in the case where a circuit depends critically on a non-linearity acting on high-frequency signals accurately would there be a problem, and while there probably exists some such circuit, it's rather questionable design in analog as well because you should plan for component tolerances (no practical diode is instant for example, even if at audio rates we can treat something like 1N4148 pretty much as such).
Ichad.c - Mon May 21, 2012 11:13 am
Richard_Synapse wrote:

Haven't tried but I'd expect it to blow up in some cases. E.g. a diode clipper circuit as found in stomp boxes has a sinh(x) nonlinearity as opposed to tanh(x):

https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf

Richard

Hey Richard, the non-linearity does NOT have a sinh(x) shape. Read more carefully.
Ichad.c - Mon May 21, 2012 12:04 pm
mystran wrote:

Since you probably want to combine common sub-expressions as much as possible, this tends to be the part where you need to do some manual work, and it's also the most error prone part.

You should have written "manual work" in CAPITAL letters. Dang killing me

One thing that has bothered me since the thread started: how much difference do the non-linearities make to the sound? One distortion in the forward path and one in the feedback path vs. full non-linearities?

Should I try and take mystran's filter, and try and fudge a couple of "personalities" out of it? As a test to see - how much the non-linearities play a subjective role?

Andrew
karrikuh - Mon May 21, 2012 12:26 pm
mystran wrote:
The paper gives an approximation for the equivalent "resistance" for the diodes. That's what you want to use. Once you have a resistance value, you can calculate a linear solution using the method here. The problem is that diode current depends on resistor current which depends on diode current, which needs to be solved first. This is not something this method addresses.

I don't really follow. Equation (2) from the paper looks like it can be directly realized using your approach.
Richard_Synapse - Mon May 21, 2012 12:30 pm
Richard_Synapse wrote:

Haven't tried but I'd expect it to blow up in some cases. E.g. a diode clipper circuit as found in stomp boxes has a sinh(x) nonlinearity as opposed to tanh(x):

https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf

Richard

Hey Richard, the non-linearity does have a sinh(x) shape. Read more carefully.

This is what I wrote above yes. Read more carefully?

Richard
Richard_Synapse - Mon May 21, 2012 12:53 pm
mystran wrote:
The paper gives an approximation for the equivalent "resistance" for the diodes. That's what you want to use. Once you have a resistance value, you can calculate a linear solution using the method here. The problem is that diode current depends on resistor current which depends on diode current, which needs to be solved first. This is not something this method addresses.

Thanks, I see what you mean now- read your first post again and got it. Really brilliant stuff!

Richard
Ichad.c - Mon May 21, 2012 1:01 pm
Richard_Synapse wrote:

This is what I wrote above yes. Read more carefully.

Richard

Sorry Richard, I should actually proof-read what I write more carefully! I edited my post.
Also did not imply any disrespect! The actual signal does not have a sinh(x) characteristic, the sinh(x) part has to do with the current. The signal shape would be simplified to tanh(x) - if I understand it correctly.

Regards
Andrew
Richard_Synapse - Mon May 21, 2012 1:17 pm
Sorry Richard, I should actually proof-read what I write more carefully! I edited my post.
Also did not imply any disrespect! The actual signal does not have a sinh(x) characteristic, the sinh(x) part has to do with the current. The signal shape would be simplified to tanh(x) - if I understand it correctly.

Yes, tanh is one possible static approximation. There's some more given in his papers, I think atan(x) was slightly closer to his reference curve.

Richard
Ichad.c - Mon May 21, 2012 3:07 pm
aciddose wrote:
sallen-key is similar to this:

f_c = not sure, i'd have to check
fb_c = fb + fb / (1.0 - f_c);
a += (in - a + (a - saturate(b)) * fb_c) * f_c
b += (a - b) * f_c

Thanks for the clarification aciddose, just wanted to know for educational purposes.

Regards
Andrew
aciddose - Mon May 21, 2012 4:07 pm
the difference you need to be aware of is that (a - saturate(b)) * fb_c is actually not correct. b is scaled first, then saturated, then differenced with a. doing it this way is just an optimization.
Ichad.c - Tue May 22, 2012 3:03 am
Hey Mystran, I have a couple of ( ) questions:

1: Can a non-BLT transfer function be used with your method?

2: Say - if Maxima can solve y0,y1,y3 etc, is there a general rule as to - how to update the states?

The reason I ask is - I've been trying to convert aciddose's Sallen-Key topology to zero-delay; Maxima's solution seems correct:

// Sallen-Key Filter by aciddose
Omega = cutoff / samplerate;
r = res + res / (1.0 - fcut);
fcut2 = fcut*fcut;

//solve y1
y1 = -fcut * buf2 + (fcut2 * r - fcut2 + fcut) * buf1 + fcut2 * input / (1.0 + fcut2 * r);

//solve y0
y0 = fcut2 * r * y1 + (fcut * r - fcut + 1.0) * buf1 + fcut * input / (1.0 + fcut2 * r);

But I can't seem to get a LP response out of it - all I get is a low-level bandpass. Think I'm updating the states incorrectly.

Andrew
Z1202 - Tue May 22, 2012 5:13 am
Hey Mystran, I have a couple of ( ) questions:

1: Can a non-BLT transfer function be used with your method?
What would be the reason to use a non-BLT transfer function???
mystran - Tue May 22, 2012 5:14 am
Hey Mystran, I have a couple of ( ) questions:

1: Can a non-BLT transfer function be used with your method?

Well, anything with at most a single zero per integrator should "work", but the point of BLT is to get something predictable.

Quote:

2: Say - if Maxima can solve y0,y1,y3 etc, is there a general rule as to - how to update the states?

It depends on the integrator (both structure and transfer function). The way you calculate new states is quite irrelevant, as long as you get the correct results. The correct results depend on what you are trying to do.

Quote:

The reason I ask is - I've been trying to convert aciddose's Sallen-Key topology to zero-delay; Maxima's solution seems correct:

Urgh. I actually thought of complaining that he should post proper differential equations instead, but thought it'd be futile. In any case, your best bet is to infer the analog model, and then discretize it properly.

Also, I'm sorry to say but I'm not going to debug anyone's solution code. I don't even debug my own solution code. I've come to the conclusion that the easiest way is to debug the model and redo the solutions (using whatever form of automation; copy-paste from Maxima works, but a spice-like netlist to C++ would be quite nice actually), then test them as-is until it works (at which point one can simplify the solve for CPU purposes). If the model needs a change, just redo the solution unless it's something trivial; the chances of getting this right are much better once the models get more complex.
Ichad.c - Tue May 22, 2012 5:52 am
mystran wrote:
In any case, you're best bet is to infer the analog model, and then discretize it properly.

Thanks for the advice mystran. Guess I'll have to go Analog->BLT->Zero-Delay.
Maxima can't do BLT, so I'll google, thanks.

Andrew
mystran - Tue May 22, 2012 8:24 am
mystran wrote:
In any case, you're best bet is to infer the analog model, and then discretize it properly.

Thanks for the advice mystran. Guess I'll have to go Analog->BLT->Zero-Delay.
Maxima can't do BLT, so I'll google, thanks.

First of all: Maxima CAN do BLT in the sense that you can map transfer functions. All you need to do is substitute s=1/tan(w/2)*(z-1)/(z+1) and use radcan() to simplify. It works, I use it all the time.

However, that's NOT what you want here. Look at my example around page 2 or so: we want to simply replace every analog integrator (most commonly capacitor, though inductors are possible as well) with a digital equivalent. You then get a set of equations which depend on each other, and you solve the system. If you have trouble understanding the idea of differential equations, then the differentials (eg dV/dt) simply describe the (continuous) rate of change of the variables (ie what we want to integrate).

The BLT is built into the integrator we use for the TPT (topology-preserving transform; Vadim's term for the process). Every integration (of the differentials) becomes one discrete BLT integrator and we get the BLT of the filter as a by-product of the conversion using BLT integrators. You certainly don't need to do it explicitly.
aciddose - Tue May 22, 2012 1:07 pm
it isn't "my" version of a sallen-key. it's the simplest (least expensive) configuration that works similarly to sallen-key. as i mentioned it's actually not correct.

to do it correctly is actually more expensive than other options.

for example, what i posted is to a sallen-key as this is to a "ladder":

a += x(input - feedback*d - a)
b += x(a - b)
c += x(b - c)
d += x(c - d)

obviously this implementation doesn't actually work. it does give you an idea of the structure though.
Ichad.c - Tue May 22, 2012 2:03 pm
Now I'm even more confused. Think that I'm way too much of a noob (c++, math and otherwise), so I think I'll gracefully stay out of this thread from now on.

Thank you Mystran and Aciddose for all the help, code snippets and explanations.

Regards
Andrew
aciddose - Wed May 23, 2012 12:36 am
here is a version of the 4-pole that doesn't alias like mad:

float f = tan(3.1415926535897932384626433832795f * cutoff);
float fd2 = 2.0f * f;
float r = 4.0f * resonance;

float g = 1.0f / (1.0f + f);

float f3 = f*g * g;
float f2 = f*g * f*g * g;
float f1 = f*g * f*g * f*g * g;
float f0 = f*g * f*g * f*g * f*g;

s[0] = limit(s[0], -1.0f, 1.0f);

float y3 = (g*s[3] + f3*s[2] + f2*s[1] + f1*s[0] + f0*in) / (1.0f + r*f0);

float xx = (in - r*y3);
float y0 = g*(s[0] + f*xx);
float y1 = g*(s[1] + f*y0);
float y2 = g*(s[2] + f*y1);

s[0] += fd2 * (xx - y0);
s[1] += fd2 * (y0 - y1);
s[2] += fd2 * (y1 - y2);
s[3] += fd2 * (y2 - y3);

return y3 * (1.0f + r);

since the coefficients are all related directly to cutoff, they can actually all be placed into a look-up table of vectors. i'm not sure what the most efficient method would be.

the really interesting thing about this though is that it's still more expensive than the filter i've used for ten years which outputs similar results. (equally tunable, stable, sounds identical...)

the tanh shaping just makes it sound so muddy... what would be interesting is a much stronger saturation applied only to the feedback not in the signal path. that would start to sound like an IR3109 if adjusted correctly.

http://soundcloud.com/aciddose/boring-saturation-blah

here is an attempt at that. mp3 compression insisted on adding aliasing-like tones but these are not present in the raw output. not much tweaking, just a quick test to see if the same methods i normally use apply well here. apparently, they do.

it's possible to apply the shaping directly without modifying the calculation for s[3]'s state (and therefore all other states.) atanh works best to create the desired effect but approximations are both faster and can be tweaked to be scaled to match whatever you're modelling more accurately without scaling factors. (factors can be built into the approximation.)

still super expensive, though using this as a basic building block i believe it is possible to find good results.
coops2 - Wed May 23, 2012 1:01 am
I have read the thread and I think I get the concept now, however I am slightly unsure as to why we need the 0.5 sample delay rather than just using the input directly?
aciddose - Wed May 23, 2012 1:48 am
seems to get things more in-phase with the tanh saturation. in the version i posted it has a lot less effect.
antto - Wed May 23, 2012 1:51 am
aciddose: i so far only tried one such "TPT" resonant filter (the diode ladder from karrikuh) and he initially used something similar to tanh() iirc
the thing was so nasty
when i replaced it with my old hard clipper (x = x < -1 ? -1 : (x > 1 ? 1 : x))
it was so much better.. the tuning was perfect and there was no detuning when abusing the feedback level
tho there was something else which i also have in my old ladder filter (which is not TPT) - the more feedback you put in, the more the tuning starts to become stair-like.. if you know what i mean.. that's due to precision being lost, even tho i use double everywhere

anyway, just wanted to note that a hardclipper works most of the times quite good, unless you want authentic saturation (which IMO only makes tuning worse)
