Cheap non-linear zero-delay filters

DSP, Plugin and Host development discussion.

Post

mystran wrote:
jupiter8 wrote:Mystran's Cheap non-linear zero-delay filter, soon in every synth. :D
Well, hopefully not. Hopefully "mystran's cheap non-linear zero-delay filters" in every synth that doesn't have anything better already. ;)
More seriously though: even if everyone started using something like this, I doubt it'll lead to an "every filter sounds the same" problem any more than existing practice already does. You can already hear tons of simple synths using the same few old models, yet quality synths still manage to have their own sound.

For what it's worth, I have no intention of using the ladder from the second post as such. It's a simplified model, with details stripped that (1) I think are important for the transistor ladder sound but (2) contribute nothing to the discussion here. :)

Post

izonin wrote:Is this one based on Vadim's 0df filter, or is it a new concept?
I think the "credit" chain goes something like:

Vadim wrote the paper (is this the correct one to reference?), which actually does pretty much the same as what I'm doing; all I'm really doing is throwing away any iterations. I think this showed everyone that there's something important in here.

Neotec started an insightful thread about this stuff last year. I'd like to credit him for the insightful use of the TDF2 integrator, which turns out to be a wonderful choice. In that thread we discussed shaping of state variables, and if you rewrite the state updates the way I do, this gives another simple method (in fact I started out by just improving that method).

Also, Urs (or rather U-he) released Diva, which made everyone (including me) realize that we really should forget about the old fudge-factor models. I should certainly credit Andrew for the Drop too, even if that's not available yet (some of the samples he has posted certainly sound a lot better than anything I've done). Without these two guys I'd probably never have gotten interested in the whole thing (since I already had some reasonably well-working "traditional" filters).

Now, all I did was stumble on a simple simplification (somewhat by accident). It was actually only supposed to be a "prediction" step for an iterative solver. Only after I tried to iteratively improve the solution (without getting much audible benefit out of it) did I start analyzing what was going on and realize I'd found a prediction that (at least for the simple cases we've discussed so far) is good enough to use directly.

Basically: Credit to Vadim, Urs, Andy, neotec (what's his real name?) and everyone else that has contributed to the various discussions of zero-delay filters in the past few years. My contribution is rather small; I'm standing on the shoulders of giants. ;)
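
To make the "prediction for an iterative solver" idea a bit more concrete, here is a minimal sketch of my own (made-up names, and not the exact scheme from this thread): a single trapezoidally integrated tanh stage leads to an implicit per-sample equation y = s + g*tanh(x - y), which you can attack by predicting y and then optionally refining with fixed-point iterations.

Code: Select all

	// Hypothetical illustration (names are mine): one trapezoidal tanh pole.
	// The implicit equation per sample is  y = s + g*tanh(x - y), where s
	// collects the known state terms and g is the integrator gain.
	#include <cmath>

	float solve_stage(float x, float s, float g, float y_prev, int iterations)
	{
		float y = y_prev;                     // prediction: reuse the last output
		for (int i = 0; i < iterations; ++i)  // optional fixed-point refinement
			y = s + g * std::tanh(x - y);
		return y;
	}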

Post

mystran wrote:
izonin wrote:Is this one based on Vadim's 0df filter, or is it a new concept?
I think the "credit" chain goes something like:

Vadim wrote the paper (is this the correct one to reference?), which actually does pretty much the same as what I'm doing; all I'm really doing is throwing away any iterations. I think this showed everyone that there's something important in here.

Neotec started an insightful thread about this stuff last year. I'd like to credit him for the insightful use of the TDF2 integrator, which turns out to be a wonderful choice. In that thread we discussed shaping of state variables, and if you rewrite the state updates the way I do, this gives another simple method (in fact I started out by just improving that method).

Also, Urs (or rather U-he) released Diva, which made everyone (including me) realize that we really should forget about the old fudge-factor models. I should certainly credit Andrew for the Drop too, even if that's not available yet (some of the samples he has posted certainly sound a lot better than anything I've done). Without these two guys I'd probably never have gotten interested in the whole thing (since I already had some reasonably well-working "traditional" filters).

Now, all I did was stumble on a simple simplification (somewhat by accident). It was actually only supposed to be a "prediction" step for an iterative solver. Only after I tried to iteratively improve the solution (without getting much audible benefit out of it) did I start analyzing what was going on and realize I'd found a prediction that (at least for the simple cases we've discussed so far) is good enough to use directly.

Basically: Credit to Vadim, Urs, Andy, neotec (what's his real name?) and everyone else that has contributed to the various discussions of zero-delay filters in the past few years. My contribution is rather small; I'm standing on the shoulders of giants. ;)
The reason I'm asking is that I have Vadim's 0df filter as a Reaktor ensemble, and since it's similar I could try to adapt it to this new model.
As far as crediting goes I think UAD's Moog Multimode also deserves to be mentioned, as, I believe, it inspired both Urs and Andy.

Post

mystran wrote:
mystran wrote:

Code: Select all


 r' = r / (1.458 * 1.96) ~ r / 2.86

 dv0/dt = f * tanh( 1.1 * in - r' * tanh( 1.96 * v3 ) - v0 )
 dv1/dt = f * tanh( 1.1 * v0 - v1 )
 dv2/dt = f * tanh( 1.1 * v1 - v2 )
 dv3/dt = f * tanh( 1.1 * v2 - v3 )


[...]

edit: sound sample for the above (if I didn't make any mistakes) http://www.signaldust.com/files/cascade.mp3
(mp3 but high bitrate.. oh and 44.1kHz host rate with x4 oversampling)
Since I edited the sound sample into the post afterwards and we transitioned to the next page, someone might not have noticed it (and nobody commented). Does any of you like that sound? (And apologies for the crappy playing.)
I think it sounds quite nice although I can't tell how authentic it is.
mystran wrote:Karrikuh specifically: are you getting similar results to the above?
Not sure, but maybe you can judge for yourself, I created a couple of test sounds:

http://www.blaukraut.info/downloads/filter_test.rar (8.4 Mb)

It contains a slow resonance sweep over a pure sawtooth wave at different pitches and filter input gains. The artifacts I mentioned earlier, which were slightly irritating to me, show up with the high-pitched input (note C5). There you can hear a strong emphasis and distortion when the resonance sweeps over a harmonic of the input, so the overall sweep is not quite smooth.

Furthermore, there are some standard sounds, all with resonance set relatively high. There's also a comparison of your full OTA implementation with 5 tanhs and a simplified one with only feedback clipping. Finally, I included a sawtooth sequence running through a 2-pole LPF based on your approach. I think the latter has a resonance characteristic extremely similar to Diva's Sallen-Key filter.

EDIT: everything @ 2x oversampling
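
As a side note, for anyone who wants to hear the structure of the stage equations quoted above without the zero-delay machinery, here is a minimal brute-force sketch of mine (plain forward Euler with sub-stepping; the names and parameter scaling are my own assumptions, and this is explicitly not the method this thread is about):

Code: Select all

	// Naive forward-Euler integration of the quoted transistor-ladder ODEs.
	// f is the per-output-sample cutoff coefficient, r the resonance amount;
	// 'steps' sub-steps are taken per output sample (crude oversampling).
	#include <cmath>

	struct Ladder
	{
		float v0 = 0, v1 = 0, v2 = 0, v3 = 0;

		float process(float in, float f, float r, int steps)
		{
			const float rp = r / (1.458f * 1.96f);  // r' from the quoted post
			const float h  = f / steps;             // step size per sub-step
			for (int i = 0; i < steps; ++i)
			{
				const float d0 = std::tanh(1.1f*in - rp*std::tanh(1.96f*v3) - v0);
				const float d1 = std::tanh(1.1f*v0 - v1);
				const float d2 = std::tanh(1.1f*v1 - v2);
				const float d3 = std::tanh(1.1f*v2 - v3);
				v0 += h*d0; v1 += h*d1; v2 += h*d2; v3 += h*d3;
			}
			return v3;
		}
	};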

Post

mystran wrote:Oh and AQ: if you use this you owe me a Poly-Ana. ;)
I will seriously take a look at it, thanks! And you're on, deal!

Post

I read somewhere that Sallen-Key filters use positive feedback. Is that true?

Post

Hmmh, maybe you're right that the difference between the full model and the simplification is somewhat subtle. The reason is probably that for frequencies below the cutoff, the output (of a stage) tracks the input (of the same stage) fairly closely, so the difference will be small and the gain will be more or less linear. Intuitively that means the most obvious distortion (i.e. from higher frequencies) gets filtered out, and some cutoff modulation is likely the most obvious observable effect (there is no cutoff modulation whatsoever if you simplify to a single clipper).
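
To spell out the structural difference being discussed, here's a naive sketch of my own (not code from this thread), just to show where the per-stage tanh sits versus a single feedback clipper:

Code: Select all

	#include <cmath>

	// Full model: every stage input passes through tanh, so each stage both
	// saturates and effectively modulates its own cutoff with signal level.
	void step_full(float in, float f, float r, float v[4])
	{
		v[0] += f * std::tanh(in - r * std::tanh(v[3]) - v[0]);
		for (int i = 1; i < 4; ++i)
			v[i] += f * std::tanh(v[i-1] - v[i]);
	}

	// Simplification: linear stages, tanh only on the feedback signal, so the
	// level-dependent cutoff modulation inside the stages disappears entirely.
	void step_feedback_clip_only(float in, float f, float r, float v[4])
	{
		v[0] += f * (in - r * std::tanh(v[3]) - v[0]);
		for (int i = 1; i < 4; ++i)
			v[i] += f * (v[i-1] - v[i]);
	}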

As for the C5 note and artifacts: here's what my new toy does (if I drop the resonance below self-oscillation it agrees better). With the oscillator disabled, it's clean except for faint control noise. So compared to that I'd rather say "the model sounds too nice", though it's quite possible the vintage chips were smoother and/or the VCA is doing something evil, or tanh() is not a good model for the gain cells and/or the feedback limit (I suspect this might be the case), or any number of other things. That said, I'm probably going to keep the model (I like it, and I'm more into nice-sounding stuff than exact emulation), at least until I get my hands on a vintage chip (assuming that ever happens). :P

Your 2-pole is lovely btw. :)

Post

Ichad.c wrote:I read somewhere that Sallen-Key filters use positive feedback. Is that true?
Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.

Post

mystran wrote:
Ichad.c wrote:I read somewhere that Sallen-Key filters use positive feedback. Is that true?
Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.
Are you sure? Poly's is negative. As is my understanding of Moog and various other classic designs.

Post

AdmiralQuality wrote:
mystran wrote:
Ichad.c wrote:I read somewhere that Sallen-Key filters use positive feedback. Is that true?
Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.
Are you sure? Poly's is negative. As is my understanding of Moog and various other classic designs.
I said "at least around the cutoff frequency" and that's the important part. For the ladder this happens to be negative at DC because cutoff is 180 degrees out of phase, so negative gain is specifically how you make it positive at cutoff.

But the reason I said what I said is that I'm aware of these myths that say the "positive feedback" in a Sallen-Key is somehow responsible for the sound of a particular filter. It's not; it's the non-linearities of a particular design that matter, and you can't reduce it to "positive" vs "negative"!

Nobody has complained about the fact that I've been writing down differential equations that are totally wrong as far as "positive" vs "negative" goes. All the OTA buffers and transistor ladder stages and whatever are actually inverting, while I've been writing them as if they were not. The reason I've been doing that is that it doesn't matter as long as you are consistent (ie if you have an asymmetric non-linearity, and you invert the signal going through that, then you have to invert the non-linearity too).
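
A tiny example of that consistency rule (mine, with a made-up asymmetric curve): if you flip the sign of the signal going into and out of a nonlinearity g, the curve you apply has to become -g(-x); for an odd function like a plain tanh that is the same curve, which is exactly why ignoring the inversions is harmless there.

Code: Select all

	#include <cmath>

	// Hypothetical asymmetric nonlinearity: clips harder on the negative side.
	float g(float x)
	{
		return x >= 0.0f ? std::tanh(x) : 0.5f * std::tanh(2.0f * x);
	}

	// If the real stage computes y = g(x) but the model carries sign-flipped
	// signals (xm = -x, ym = -y), then ym = -g(-xm): the model must use the
	// flipped curve g_flipped(u) = -g(-u).  For an odd g (plain tanh),
	// g_flipped is identical to g, so the sign convention doesn't matter.
	float g_flipped(float u)
	{
		return -g(-u);
	}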

Post

mystran wrote:
AdmiralQuality wrote:
mystran wrote:
Ichad.c wrote:I read somewhere that Sallen-Key filters use positive feedback. Is that true?
Any self-oscillating filter will necessarily use positive feedback, at least around the cutoff frequency.
Are you sure? Poly's is negative. As is my understanding of Moog and various other classic designs.
I said "at least around the cutoff frequency" and that's the important part. For the ladder this happens to be negative at DC because cutoff is 180 degrees out of phase, so negative gain is specifically how you make it positive at cutoff.
Gotcha.

But the reason I said what I said is that I'm aware of these myths that say the "positive feedback" in a Sallen-Key is somehow responsible for the sound of a particular filter. It's not; it's the non-linearities of a particular design that matter, and you can't reduce it to "positive" vs "negative"!

Nobody has complained about the fact that I've been writing down differential equations that are totally wrong as far as "positive" vs "negative" goes. All the OTA buffers and transistor ladder stages and whatever are actually inverting, while I've been writing them as if they were not. The reason I've been doing that is that it doesn't matter as long as you are consistent (ie if you have an asymmetric non-linearity, and you invert the signal going through that, then you have to invert the non-linearity too).
Cool. I can't pretend to understand it all, but will let you know when I've had a chance to try it.

Post

I have a question about delayless filters:
Is there some advantage to them as opposed to oversampling the same thing? Can oversampling work as well or does it have different problems? I could see the non-linearity working just as well, but what about the tuning and modulation?

Post

camsr wrote:I have a question about delayless filters:
Is there some advantage to them as opposed to oversampling the same thing? Can oversampling work as well or does it have different problems?
Not that I'm super confident in my understanding of things, but... eventually it boils down to how you solve the underlying equations. Oversampling is the brute-force way of dividing the problem into smaller steps, while the zero-delay (implicit) formulations transform the problem so that each step can be solved more accurately. A solution over a single sample interval can be engineered to an accuracy that's good enough for audio, and it can be done quickly enough for real-time use on a PC. It costs CPU, but I'd say there's a reliable consensus that it's overwhelmingly worth it to get a certain kind of sound.
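
As a toy linear example of "solving more per step instead of just taking smaller steps" (my own naming, and only a one-pole): explicit stepping needs the step to be small relative to the cutoff to behave, while a trapezoidal update solves the implicit equation for the same pole in closed form and behaves much better at the base rate.

Code: Select all

	// Toy one-pole lowpass dy/dt = wc*(x - y), discretized two ways.
	// wc = 2*pi*cutoff/samplerate (at whatever rate you run the update).

	// Explicit (forward Euler): cheap, but accuracy and stability depend on
	// wc being small, i.e. on oversampling relative to the cutoff.
	float onepole_euler(float x, float &y, float wc)
	{
		y += wc * (x - y);
		return y;
	}

	// Trapezoidal: solve y[n] = y[n-1] + (wc/2)*((x - y[n-1]) + (x - y[n]))
	// for y[n].  The equation is linear, so the "implicit" solve has a closed
	// form; this is the usual bilinear-style one-pole, much better behaved at
	// low oversampling (and the cutoff can be prewarped for exact tuning).
	float onepole_trapezoidal(float x, float &y, float wc)
	{
		const float g = 0.5f * wc;
		y = (y + g * (2.0f * x - y)) / (1.0f + g);
		return y;
	}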

Post

As far as I understand it: if we consider the speed of the current, work out some rough time period for the shortest analog feedback loop, and use a correspondingly high sampling rate, I guess there would be no need for fancy stuff like zero-delay. I'm talking about some uber-giga-hyper insane sampling rate, of course.

BUT then comes the problem with rounding and low cutoff frequencies..

so you cannot just oversample like mad..
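
The rounding problem is easy to demonstrate with a toy example of mine: at an absurd sample rate the per-sample coefficient for a low cutoff becomes so small that single-precision state updates simply round away.

Code: Select all

	#include <cstdio>

	int main()
	{
		// 20 Hz cutoff at a 1 GHz "uber" rate: coefficient ~1.26e-7 per sample.
		const float wc = 2.0f * 3.14159265f * 20.0f / 1.0e9f;

		// One-pole fed with a constant 1.0; it should creep towards 1.0, but
		// near 1.0 the per-sample increment (~1e-13 here) is far below the
		// float spacing just under 1.0 (~6e-8), so the state never moves.
		float y = 0.999999f;
		for (int i = 0; i < 1000000; ++i)
			y += wc * (1.0f - y);

		std::printf("%.9f\n", y);   // still (approximately) 0.999999
		return 0;
	}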

Post

Could you replace the averaging "half sample delay" with lerp() so it's clear to anyone reading the code what is happening?

Have you tried more advanced interpolations?

Code: Select all

	// assumes <cmath> for sin()/cos(); INL is just an inline macro
	#include <cmath>
	#ifndef INL
	#define INL inline
	#endif
	using std::sin;
	using std::cos;

	// helpers, declared first because lerpsinc() and sinci() use them
	template<class T>
	INL T sinc(const T &X)
	{
		// guard the division so that sinc(0) == 1
		return (X == (T)0) ? (T)1 : sin(X) / X;
	}

	template<class T>
	INL T lanczos(const T &v)
	{
		return sinc<T>(v * (T)3.14159 / (T)2.0) * sinc<T>(v * (T)3.14159);
	}

	// two-point interpolators: blend between A and B for X in [0,1]

	template<class T, class F>
	INL T lerp(const T &A, const T &B, const F &X)
	{
		return A + (B - A) * X;
	}

	template<class T, class F>
	INL T lerpcos(const T &A, const T &B, const F &X)
	{
		return A + (B - A) * (((F)1.0 - cos(X * (F)3.14159)) * (F)0.5);
	}

	template<class T, class F>
	INL T lerpparabola(const T &A, const T &B, const F &X)
	{
		return A + (B - A) * (X*X * ((F)2.0 - X*X));
	}

	template<class T, class F>
	INL T lerpsinc(const T &A, const T &B, const F &X)
	{
		return A + (B - A) * ((F)1.0 - lanczos<F>(X));
	}

	// four-point interpolators: interpolate between B and C, with A and D as
	// the neighbouring samples

	template<class T, class F>
	INL T cubic(const T &A, const T &B, const T &C, const T &D, const F &X)
	{
		// polynomial coefficients (named c1..c3 so they don't shadow the
		// template parameter F)
		const T c3 = B + D - C - A;
		const T c2 = A - B - c3;
		const T c1 = C - A;
		return B + c3*X*X*X + c2*X*X + c1*X;
	}

	template<class T, class F>
	INL T hermiteb(const T &A, const T &B, const T &C, const T &D, const F &X)
	{
		const T c3 = (B - C) * 1.5 + (D - A) * 0.5;
		const T c2 = C*2.0 - B*2.5 + A - D*0.5;
		const T c1 = (C - A) * 0.5;
		return B + c3*X*X*X + c2*X*X + c1*X;
	}

	// not sure about this one, not much better cut than hermite, but phase is different
	template<class T, class F>
	INL T intb(const T &A, const T &B, const T &C, const T &D, const F &f)
	{
		const T W = (C - B) * 3.0 + (A - D);
		const T X = (B - C) * 7.5 + (D - A) * 2.5;
		const T Y = (C + A) * 0.5 - B;
		const T Z = (C - A) * 0.5;
		return B + W*f*f*f*f*f + X*f*f*f*f + W*1.5*f*f*f + Y*f*f + Z*f;
	}

	template<class T, class F>
	INL T sinci(const T &A, const T &B, const T &C, const T &D, const F &X)
	{
		return A*lanczos((F)-1.0 - X) + B*lanczos(-X) + C*lanczos((F)1.0 - X) + D*lanczos((F)2.0 - X);
	}
lerpcos is too expensive unless you use an approximation for cos. I haven't bothered to implement one myself yet, but it's simple enough if anyone wanted to try.

lerpcos, lerpparabola and lerpsinc have similar frequency response but different phase.
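
For what it's worth, the averaging being replaced and lerp() are algebraically the same thing at the midpoint, so that swap is purely for readability; it's only the wider kernels above that change the response. A quick sketch (assuming the lerp template from the block above):

Code: Select all

	// 0.5f * (a + b) and lerp(a, b, 0.5f) are algebraically identical, so
	// replacing the "half sample delay" average with lerp() only documents
	// intent; picking cubic/hermiteb/sinci instead changes the response.
	float halfway_avg (float a, float b) { return 0.5f * (a + b); }
	float halfway_lerp(float a, float b) { return lerp(a, b, 0.5f); }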
