Need sweepable high-Q bandpass filter (State Variable?)

DSP, Plugin and Host development discussion.

Post

So maybe I should propose this instead (for 2-pole SVF biquad):

Code:

 // update coeffs when parameters change
 f = tan(M_PI*cutoff / samplerate)
 r = f + 1 / Q
 g = 1 / (f * r + 1)

 // rest is per sample
 // calculate outputs
 hp = (in - r*z1 - z2) * g
 bp = z1 + f*hp
 lp = z2 + f*bp

 // and update state
 z1 += 2*f*hp
 z2 += 2*f*bp

The derivation is to plug TDF2 integrators into the prototype and solve for hp (and factor the expressions a bit). Should be safe for time-variance, right?
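(For reference, the algebra presumably goes like this: in the prototype hp = in - (1/Q)*bp - lp, substitute bp = z1 + f*hp and lp = z2 + f*bp = z2 + f*z1 + f^2*hp, then collect the hp terms:

 hp * (1 + f/Q + f^2) = in - (f + 1/Q)*z1 - z2

which is exactly hp = (in - r*z1 - z2) * g with r = f + 1/Q and g = 1/(f*r + 1).)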

Post

mystran wrote:So maybe I should propose this instead (for 2-pole SVF biquad):

...
So you're storing the doubled state in the z^-1 modules? Or what's your TDF2 integrator topology?

Post

Z1202 wrote:
mystran wrote:So maybe I should propose this instead (for 2-pole SVF biquad):
...
So you're storing the doubled state in the z^-1 modules? Or what's your TDF2 integrator topology?
Wait, the bp and lp outputs are not averaged with the previous sample's values. Looks like a bug to me.

Post

The integrator is:

out = state + f*in
state = out + f*in

If I'm not mistaken, this is known as transposed direct-form 2. Clearly it implements the transfer function:

f * (1 + z^-1)/(1 - z^-1)

When f = tan(w/2) this is directly the BLT transform of 1/s.
Substituting the first line into the second gives:

state = state + 2*f*in

As far as the averaging goes, yes it's there: the total "input current" (I don't know what else to call it, really) is 2*f*in, and we feed half to the output this sample and half to the state variable only (which is equivalent to adding it to the output next sample).
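To make that concrete, here is a minimal sketch of my reading of the structure (not code from the original post; integ_t, integ_tick and svf_tick are names made up for illustration). Inlining the two integrators and solving for hp reproduces the earlier per-sample code exactly, which also answers the "doubled state" question: the z^-1 memories hold state, and state is incremented by 2*f times the input.

Code:

 // trapezoidal (TDF2) integrator: out = state + f*in, state = out + f*in
 typedef struct { double state; } integ_t;

 static double integ_tick(integ_t *i, double in, double f)
 {
     double out = i->state + f * in;
     i->state   = out + f * in;      // i.e. state += 2*f*in
     return out;
 }

 // one sample of the SVF built from two such integrators;
 // i1->state and i2->state play the roles of z1 and z2 above
 static double svf_tick(integ_t *i1, integ_t *i2, double in,
                        double f, double r, double g)
 {
     double hp = (in - r * i1->state - i2->state) * g;
     double bp = integ_tick(i1, hp, f);
     double lp = integ_tick(i2, bp, f);
     return bp;   // or hp / lp, whichever output is wanted
 }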

I'd also like to point out that before I posted the above, I did actually check that an actual implementation works as expected (and with a simple non-linear extension to modify the band-pass feedback, it will even self-oscillate perfectly happily). ;)

Post

mystran wrote:The integrator is:

out = state + f*in
state = out + f*in
My mistake. I did the transposition of the DF2 integrator once and obtained basically the same structure as the non-transposed one, so I thought DF2 and TDF2 were basically the same for a BLT integrator. Apparently I made a mistake there :)

Post

mystran wrote:I'd also like to point out that before I posted the above, I did actually check that an actual implementation works as expected (and with a simple non-linear extension to modify the band-pass feedback, it will even self-oscillate perfectly happily). ;)
Is this a boost-type of nonlinearity, increasing the damping as the signal level goes up? Do you solve for it analytically in a "true" way, or use any kind of "cheap" approach?

Post

Z1202 wrote:
mystran wrote:I'd also like to point out that before I posted the above, I did actually check that an actual implementation works as expected (and with a simple non-linear extension to modify the band-pass feedback, it will even self-oscillate perfectly happily). ;)
Is this a boost-type of nonlinearity, increasing the damping as the signal level goes up? Do you solve for it analytically in a "true" way, or use any kind of "cheap" approach?
I adjusted the bandpass feedback up (using a waveshaper very roughly resembling the type of resistance curve you'd get with a pair of diodes) depending on the value of the first state variable, then solved analytically with the adjusted value.

So yes, boost-type in a sense and approximately as cheap as it gets.

ps. I just did that whole thing yesterday, after your remark about time-varying behavior started bugging me... so I won't claim I've done any extensive modelling or anything ;)

pps. I'm actually (finally) becoming a believer in these types of filters. This one probably had the lowest development time of all the filters I've ever written (first working proto around 15 minutes after starting with nothing but TDF2 integrator and the basic analog topology) and it's just remarkable how much abuse the resulting filters take in terms of modulation, but even more remarkably in terms of non-linear behavior (even when implemented in the cheapest ways possible).

Post

mystran wrote:I adjusted the bandpass feedback up (using a waveshaper very roughly resembling the type of resistance curve you'd get with a pair of diodes) depending on the value of the first state variable, then solved analytically with the adjusted value.
I'm using a similar type of curve in my article considering nonlinear optimizations of TPT filters, however there the curve is fixed :)
mystran wrote:pps. I'm actually (finally) becoming a believer in these types of filters. This one probably had the lowest development time of all the filters I've ever written (first working proto around 15 minutes after starting with nothing but TDF2 integrator and the basic analog topology) and it's just remarkable how much abuse the resulting filters take in terms of modulation, but even more remarkably in terms of non-linear behavior (even when implemented in the cheapest ways possible).
:)

Post

Z1202 wrote:
mystran wrote:I adjusted the bandpass feedback up (using a waveshaper very roughly resembling the type of resistance curve you'd get with a pair of diodes) depending on the value of the first state variable, then solved analytically with the adjusted value.
I'm using a similar type of curve in my article considering nonlinear optimizations of TPT filters, however there the curve is fixed :)
Well, I actually just applied a fixed curve to the state variable as well, then added it together with a "linear feedback path" for the nominal resonance. For "sufficiently low frequencies"(tm) this is pretty much equivalent to having independent linear and non-linear feedback paths in an analog filter.
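A minimal sketch of that reading, reusing the variables from the code earlier in the thread (diode_like() and drive are hypothetical names; the actual curve and scaling aren't given here). Because the waveshaper only sees the stored state z1, the hp equation stays linear in hp and the same g from the linear filter still solves it:

Code:

 // crude expansive stand-in for a diode-pair resistance curve (assumption)
 double diode_like(double x) { return x * x * x; }

 // per sample: linear damping path plus the shaped ("boost-type") path
 double fb = r * z1 + drive * diode_like(z1);
 double hp = (in - fb - z2) * g;
 double bp = z1 + f * hp;
 double lp = z2 + f * bp;
 z1 += 2 * f * hp;
 z2 += 2 * f * bp;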

Looking at the SEM-1A (which I guess is pretty close to the canonical analog-synth SVF?) schematics (and rudely ignoring the presence of the DC blocking capacitor in the feedback path), that's pretty much what we have: the feedback paths branch directly from the buffer, and the summing OTA forces its V- node to zero, so we're just adding together currents through two parallel paths.

Actually I'm not convinced it makes sense to try anything more complicated here; at audio frequencies the diodes should be more or less memory-less and we're essentially just adding half-a-sample delay to the non-linearity. Trapezoidal integration introduces similar error anyway, and both can be reduced simultaneously by simply oversampling more.

If you want to consider the OTAs non-linear too, I might go with an even cheesier trick. ;)

Post

I haven't checked back here for a while, so I'm surprised that this thread is still going.

Anyway, I tried to use a simple state variable filter with a high resonance setting (from 0.95 to 0.98). This works very well at isolating just the frequency I'm interested in (with a 28 dB boost at the center frequency).

However, the problem is that the filter's settling time is way too long, especially at lower frequencies. As I determine the frequency and adjust the filter's center frequency in realtime to home in on it, the tracking becomes unstable, because the detected frequency is affected simply by changing the center frequency.

BTW I tried various methods of detecting the frequency (including autocorrelation), but since I'm essentially dealing with a very clean sinewave after the filter, I found that a simple zero-crossing detector with Hermite interpolation to detect the actual crossing points gives me very accurate results (error < 0.001%), as long as the filter's center frequency is not altered. As soon as I adjust the filter CF (even slightly) to the previously detected frequency, the detected frequency changes, even though the input frequency didn't. Hope that makes sense.
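Not BitFlipper's actual code, but a sketch of how such a sub-sample zero-crossing detector might look (the function name and the use of central differences for the end-point slopes are assumptions):

Code:

 #include <math.h>

 /* fractional position t in [0,1) of a rising zero crossing between samples
    x1 and x2 (x1 <= 0 < x2), with neighbours x0 and x3; cubic Hermite segment
    with central-difference slopes, refined by a few Newton steps */
 static double hermite_zero(double x0, double x1, double x2, double x3)
 {
     double m1 = 0.5 * (x2 - x0);        /* slope at x1 */
     double m2 = 0.5 * (x3 - x1);        /* slope at x2 */
     double t  = x1 / (x1 - x2);         /* linear estimate as starting point */
     for (int i = 0; i < 3; i++) {
         double t2 = t*t, t3 = t2*t;
         double p  = (2*t3 - 3*t2 + 1)*x1 + (t3 - 2*t2 + t)*m1
                   + (-2*t3 + 3*t2)*x2   + (t3 - t2)*m2;
         double dp = (6*t2 - 6*t)*x1 + (3*t2 - 4*t + 1)*m1
                   + (-6*t2 + 6*t)*x2 + (3*t2 - 2*t)*m2;
         if (fabs(dp) > 1e-20) t -= p / dp;
     }
     return t;
 }

The frequency estimate is then the samplerate divided by the distance (in samples, including the fractional parts) between successive rising crossings.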

Any ideas on how to get a filter with faster settling time that still has a high Q?

Post

BitFlipper wrote:Any ideas on how to get a filter with faster settling time that still has a high Q?
As a general rule the settling time grows with Q. Intuitively this can be related to the uncertainty principle, which gives a lower bound on the time-localization of a given frequency range, so I doubt there's a straightforward way around it, at least not in filtering terms.
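As a rough rule of thumb (my own numbers, not from the post above): a second-order resonator's envelope decays roughly like exp(-pi*f0*t/Q), so the time to settle by about 60 dB is

 t60 ≈ ln(1000) * Q / (pi * f0) ≈ 2.2 * Q / f0

e.g. Q = 50 at f0 = 100 Hz already means on the order of a second before the old signal has died away.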

Regards,
{Z}

Post

I just think of it as if there is only a fixed amount of phase per filter stage - the more Q the more phase bunches up around that frequency - they are linked by good old physics. You could probably ramp down the output when there is no input, but that would be messy - why is settling time so important in your application?

Andrew

Post

Ichad.c wrote:I just think of it as if there is only a fixed amount of phase per filter stage - the more Q the more phase bunches up around that frequency - they are linked by good old physics. You could probably ramp down the output when there is no input, but that would be messy - why is settling time so important in your application?

Andrew
As I said, this is for analysis purposes. The idea is to accurately track a specific frequency when its general location is already known (I have the coarse freq, I need the fine frequency). The signal also contains other frequencies, hence the requirement for a high-Q filter.

However, while it works well in general, as soon as slight corrections to the state variable filter are made to more accurately track that frequency, the detected frequency value "oscillates" around the true frequency for a few seconds. This is too long; it needs to be reduced to less than one second for the results to be useful.

Post

BitFlipper wrote:
...
As I said, this is for analysis purposes. The idea is to accurately track a specific frequency when its general location is already known (I have the coarse freq, I need the fine frequency). The signal also contains other frequencies, hence the requirement for a high-Q filter.
What the guys are trying to say is that when you measure for a small amount of time, you can only get a rough estimate of a frequency. To improve this estimate, you have to measure for a longer period of time, so you lose in terms of time localization. This is a pretty universal principle and is orthogonal to the actual method used (be it filters or FFT or whatever).

In the time domain there are no frequencies; in the spectrum there is no time. You can "rotate" between these two and get various trade-offs, but they are all trade-offs. See Uncertainty principle: Signal processing.

Note that if you can make assumptions about the signal (e.g. a single frequency with fixed amplitude, or something) then you might be able to find a way to do better:
Wikipedia wrote: it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.
BitFlipper wrote: However, while it works well in general, as soon as slight corrections to the state variable filter are made to more accurately track that frequency, the detected frequency value "oscillates" around the true frequency for a few seconds. This is too long; it needs to be reduced to less than one second for the results to be useful.
If you get oscillation in the control, then the Q of the control-signal loop is too high. If the period of the oscillation is too long, you could increase the "cutoff frequency" of the control loop; in some sense it's a filter too. You might want to look at the "control theory" literature (the actual theory is pretty much the same, but the domain of application is different, which may or may not lead to better intuitive understanding).
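For illustration, a minimal sketch of what that could look like (my own addition; alpha, center_freq, measured_freq and update_svf_coefficients are hypothetical names): instead of snapping the SVF's center frequency straight to each new measurement, low-pass the correction so the tracking loop itself has a low Q.

Code:

 double alpha = 0.05;   // loop gain per update: higher = faster tracking but more ringing
 center_freq += alpha * (measured_freq - center_freq);   // one-pole control update
 update_svf_coefficients(center_freq);                   // recompute f, r, g as earlier in the thread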

Post

I think I know why the measured frequency changes when I make adjustments to the state variable filter. Basically, the adjustment causes the phase shift at that frequency to change. This doesn't happen instantaneously, of course, but is spread out over some number of samples, which results in the waveform being stretched or compressed to adapt to the new phase shift. During that time the output frequency really is different from the input frequency, so my measurement is essentially correct.

So one way to possibly fix it is to calculate the expected change in frequency during that transition and feed it into the control loop as a way to damp the system.
