Accelerate/vDSP Biquad filter

DSP, Plugin and Host development discussion.

Post

Is there anyone here who has already used the new biquad filter provided by Apple?

I'm using the biquad filter provided by vDSP in the Accelerate framework, specifically the single-channel biquad IIR filter. I have a couple of questions:

1) I'm wondering why there is no equivalent of the multichannel function vDSP_biquadm_SetTargetsDouble in the single-channel version. That method provides a smooth transition when updating coefficients on the fly.

2) I would like to update the coefficients on the fly in the single-channel version too. Do I have to do it manually, i.e. call vDSP_biquad_DestroySetupD on the previously created setup and then vDSP_biquad_CreateSetupD for the new coefficients (see the sketch below)?

3) vDSP_biquadD deals with arrays, so there is no way to smooth the coefficients sample by sample. Is it safe, performance-wise, to call this method one sample at a time?
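
For question 2, here is a minimal sketch of the destroy/recreate approach I have in mind (coefficient order b0, b1, b2, a1, a2 per section, per the vDSP docs; the delay buffer size of 2*sections + 2 is my reading of the headers, so please double-check):

Code: Select all

    #include <Accelerate/Accelerate.h>

    // Sketch: process one block, swap coefficients, process the next block.
    // Coefficients are ordered b0, b1, b2, a1, a2 for each section.
    void processWithCoeffSwap(const double *in, double *out, vDSP_Length n,
                              const double oldCoeffs[5], const double newCoeffs[5])
    {
        double delay[2 * 1 + 2] = { 0 }; // 2*sections + 2 state values

        vDSP_biquad_SetupD setup = vDSP_biquad_CreateSetupD(oldCoeffs, 1);
        vDSP_biquadD(setup, delay, in, 1, out, 1, n);

        // There is no single-channel SetTargets equivalent, so tear down and
        // rebuild; the delay buffer carries the filter state across the swap.
        vDSP_biquad_DestroySetupD(setup);
        setup = vDSP_biquad_CreateSetupD(newCoeffs, 1);
        vDSP_biquadD(setup, delay, in, 1, out, 1, n);

        vDSP_biquad_DestroySetupD(setup);
    }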

Thanks for any feedback.

Post

Unless you're working on an academic app to place/visualize poles and zeros, I'd strongly recommend staying away from biquad filters. They're all over the DSP literature, but they don't have much to offer in modern practical DSP.

The main problem with biquad filters is that there is no meaningful way to manipulate the coefficients, say, to sweep cutoff frequency or resonance, or to morph one filter type into another. You may get unwanted and unpredictable artifacts if you do (DC offsets, gain jumps and more). Other problems are numerical stability (not sure if the Apple ones deploy noise shaping) and the comparatively expensive computation of coefficients. That means you'd need to go double precision, and you'll waste a lot of CPU just computing the coefficients.
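
To illustrate that last point, this is roughly what the textbook (RBJ Audio EQ Cookbook) lowpass coefficient update looks like; note the two transcendental calls and the divisions on every parameter change:

Code: Select all

    #include <cmath>

    // RBJ cookbook lowpass coefficients, normalized so that a0 = 1.
    // sin, cos and several divisions per update: the per-change cost.
    void rbjLowPass(double fc, double fs, double Q,
                    double &b0, double &b1, double &b2, double &a1, double &a2)
    {
        const double w0    = 2.0 * M_PI * fc / fs;
        const double cw0   = std::cos(w0);
        const double alpha = std::sin(w0) / (2.0 * Q);
        const double a0    = 1.0 + alpha; // normalization divisor

        b0 = (1.0 - cw0) / (2.0 * a0);
        b1 = (1.0 - cw0) / a0;
        b2 = b0;
        a1 = -2.0 * cw0 / a0;
        a2 = (1.0 - alpha) / a0;
    }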

If you want numerical accuracy even in single precision floating point, simple coefficient calculation and sweepable cutoff/resonance/type, have a look at Andy Simper's implementation of State Variable Filters:

http://www.cytomic.com/technical-papers

There's pretty much nothing a cascade of 2-pole biquads can do that a cascade of SVFs can't, at possibly less effort and higher numerical accuracy. However, it hasn't made it into the standard DSP literature yet, so no-one has done fancy pole-zero plots for it yet, afaik :clown:

Post

I understand that it is better to stay away from biquads and prefer the SVF, but the biquad filter provided by Apple is quite new and should address the issues you highlighted.

There is a special method to manipulate the coefficients: in the multichannel version, Apple provides vDSP_biquadm_SetTargetsDouble. It seems that this method prepares the setup object to smooth the parameter changes during processing.

https://developer.apple.com/library/mac ... lementID_3

You also mentioned wasted CPU, but the Accelerate framework's stated aim is to provide a set of functions optimised for performance.

Regarding biquads on their own, I found a lot of AudioUnit examples provided by Apple that use them. The JUCE framework's filters are also biquads; they are everywhere :?

Post

FredAnton wrote:they are everywhere :?
Indeed. They're like a plague.

I guess Apple uses some form of crossfading: either moving poles and zeros per sample, interpolating coefficients, or crossfading the outputs of two independently filtered copies of the input signal. Either way, the waste of CPU cycles is immense for a process that is easily replaced by a more robust, faster and more accurate method like the SVF (or any other implicitly solved analogue model using trapezoidal integration, such as Sallen-Key).
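
For illustration, a hypothetical sketch of the third option (the struct and names are mine, not Apple's):

Code: Select all

    // Crossfading the outputs of two independently run filters: smooth, but
    // it doubles the per-sample filter cost for the duration of the ramp.
    struct Biquad {
        double b0 = 1, b1 = 0, b2 = 0, a1 = 0, a2 = 0; // direct form I coefficients
        double x1 = 0, x2 = 0, y1 = 0, y2 = 0;         // input/output history
        double tick(double x) {
            double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;
            y2 = y1; y1 = y;
            return y;
        }
    };

    // t ramps from 0 to 1 across the transition, after which 'next' takes over.
    double crossfadeTick(Biquad &current, Biquad &next, double x, double t) {
        return (1.0 - t) * current.tick(x) + t * next.tick(x);
    }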

Sometimes it occurs to me that biquad filters were made up by boffins just to look smart. I understand, though, that back when division was really expensive on CPUs, a scheme with just additions and multiplications (or lookup tables) had no alternative. Which might also explain why Hal Chamberlin introduced unit delays back then instead of solving the state variable filter implicitly, but I digress...

Post

Urs wrote:I understand, though, that back when division was really expensive on CPUs, a scheme with just additions and multiplications (or lookup tables) had no alternative.
Which is still no reason to prefer direct form biquads to the ZDF SVF. The division comes from the trapezoidal integration and is present in both schemes. If the coefficients are not changing with every sample, the ZDF SVF can also be reduced to just a small number of additions and multiplications, pretty much comparable to DF biquads.
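
(A rough count for a fixed-coefficient lowpass: the trapezoidal SVF update comes to about 6 multiplications and 6 additions per sample, versus 5 multiplications and 4 additions for a direct form biquad.)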

Post

Z1202 wrote:
Urs wrote:I understand, though, that back when division was really expensive on CPUs, a scheme with just additions and multiplications (or lookup tables) had no alternative.
Which is still no reason to prefer direct form biquads to the ZDF SVF. The division comes from the trapezoidal integration and is present in both schemes. If the coefficients are not changing with every sample, the ZDF SVF can also be reduced to just a small number of additions and multiplications, pretty much comparable to DF biquads.
Yup, of course :dog:

Post

Urs wrote: If you want numerical accuracy even in single precision floating point, simple coefficient calculation and sweepable cutoff/resonance/type, have a look at Andy Simper's implementation of State Variable Filters:

http://www.cytomic.com/technical-papers
I've roughly tested the pseudo-code described in the paper. If I understand properly: first we set the coefficients (here, in my example, the lowpass), then we call tick on each sample. Updating the coefficients can also be done on the fly by calling setLowPass with a smoothed cutoff... After doing a basic sweep on white noise I found the amplitude lower than expected... Did I miss something in the implementation? By the way, I use a Q of 0.1 for test purposes.

Code: Select all

    // Member state assumed: float g, k, a1, a2, a3, m0, m1, m2, ic1eq, ic2eq;
    // (needs <cmath> for tan and M_PI)
    void setLowPass(float cutoff, float samplerate, float Q) {
        g = tan((cutoff / samplerate) * M_PI); // prewarped integrator gain
        k = 1 / Q;                             // damping, k = 1/Q
        a1 = 1 / (1 + g * (g + k));
        a2 = g * a1;
        a3 = g * a2;
        m0 = 0;                                // output mix: lowpass is v2 only
        m1 = 0;
        m2 = 1;
    }

Code: Select all

    float tick(float v0) {
        float v3 = v0 - ic2eq;                   // input minus second integrator state
        float v1 = a1 * ic1eq + a2 * v3;         // bandpass node
        float v2 = ic2eq + a2 * ic1eq + a3 * v3; // lowpass node
        ic1eq = 2 * v1 - ic1eq;                  // trapezoidal state updates
        ic2eq = 2 * v2 - ic2eq;
        return m0 * v0 + m1 * v1 + m2 * v2;      // mix input/band/low outputs
    }
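
For completeness, the sweep I'm doing looks roughly like this, assuming the two snippets above live in a small SVF class with ic1eq/ic2eq initialised to zero:

Code: Select all

    // Per-sample sweep: smooth the cutoff and recompute the coefficients on
    // every sample. The tan() call is the main per-sample cost here.
    void sweep(SVF &svf, const float *in, float *out, int numSamples,
               float startCutoff, float targetCutoff, float samplerate, float Q)
    {
        float cutoff = startCutoff;
        for (int n = 0; n < numSamples; ++n) {
            cutoff += 0.001f * (targetCutoff - cutoff); // one-pole parameter smoother
            svf.setLowPass(cutoff, samplerate, Q);
            out[n] = svf.tick(in[n]);
        }
    }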

Post

Haven't studied Andrew's code. Maybe Q means something different in his code, dunno.

By the conventional definition, Q is usually somewhere in the range of 0.5 to infinity. For a second-order filter, a Butterworth response corresponds to Q = 1/sqrt(2) ≈ 0.7071.

It is possible for Q to be less than 0.5, but Q is usually >= 0.5.

Perhaps your selection of Q gives unexpected results?
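
For what it's worth: for a textbook second-order lowpass H(s) = w0^2 / (s^2 + (w0/Q)*s + w0^2), the magnitude at the cutoff frequency is exactly Q. So Q = 0.1 puts the output 20*log10(0.1) = -20 dB down at cutoff, which alone would read as "amplitude lower than expected" on a sweep.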

Post

JCJR wrote:Haven't studied Andrew's code. Maybe Q means something different in his code, dunno.

By the conventional definition, Q is usually somewhere in the range of 0.5 to infinity. For a second-order filter, a Butterworth response corresponds to Q = 1/sqrt(2) ≈ 0.7071.

It is possible for Q to be less than 0.5, but Q is usually >= 0.5.

Perhaps your selection of Q gives unexpected results?
I've checked the example provided in the technical paper (it seems to be Mathematica); the Q is set to 0.5. I'll check that tonight. Thanks!

Post

Thanks Fred

I should read the paper and try to learn something.

If Q = 0.5 is used as an example, then here is a possible reason (maybe explained in a doc I lazily have not read): old variants of the digital state variable filter worked well but became unstable above a certain frequency. They tended to retain stability up to higher frequencies with higher Q settings, and tended to go unstable at lower frequencies given low Q settings.

So if his state variable filter can stay stable over the full frequency range at Q = 0.5, then it is a great proof of concept, a wonderful improvement for digital state variable filters.

Analog state variable filters did not have that frequency-dependent stability issue, so far as I know.

Post

JCJR wrote:So if his state variable filter can stay stable over the full frequency range at Q = 0.5, then it is a great proof of concept, a wonderful improvement for digital state variable filters.
This is the gist of *cough* zero delay feedback filters *cough*

The often-quoted implementation by Hal Chamberlin in "Musical Applications of Microprocessors" is what one could call a model of an analogue circuit that more or less resembled its actual structure, unlike any direct form biquad filter. However, Hal Chamberlin also used the term "unit delay" to express the idea that implicit solving was somehow impossible when discretizing the circuit. He elegantly circumvented the problem by using Euler integration, which gave him the freedom to a) solve the filter equations explicitly and b) use the required unit delay for integration. However, the state was merged with the outputs, and classic circuit analysis (Kirchhoff...) was bypassed.
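
For reference, a minimal sketch of that Chamberlin structure (the naming is mine; f and q follow the book's definitions):

Code: Select all

    #include <cmath>

    // Chamberlin digital SVF: Euler integration with a unit delay in the loop.
    // Classic caveat: stability only holds up to roughly fs/6, and reportedly
    // gives out earlier at low Q, as JCJR describes above.
    struct ChamberlinSVF {
        float f = 0.0f, q = 0.0f;      // f = 2*sin(pi*fc/fs), q = 1/Q
        float low = 0.0f, band = 0.0f; // state, merged with the outputs

        void set(float fc, float fs, float Q) {
            f = 2.0f * std::sin(float(M_PI) * fc / fs);
            q = 1.0f / Q;
        }

        float tickLowpass(float in) {
            low += f * band;                  // integrate with last sample's band
            float high = in - low - q * band; // the explicit "unit delay" at work
            band += f * high;                 // integrate
            return low;
        }
    };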

Nevertheless, as Andy's papers show, it's neither impossible nor difficult to solve the circuit implicitly based on classic circuit analysis. Furthermore, the trapezoidal rule is a more accurate one-step integration method than Euler's.
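
(For a single integrator with state y, input x and sample period T: forward Euler updates y[n] = y[n-1] + T*x[n-1], while the trapezoidal rule uses y[n] = y[n-1] + (T/2)*(x[n] + x[n-1]). Besides being second-order accurate, the trapezoidal rule is equivalent to the bilinear transform, so it maps the entire stable region of the analogue prototype inside the unit circle, which is why the discretized filter stays stable at any cutoff.)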

As a result, when using the latter (solving the implicit circuit equations directly, a.k.a. zero delay feedback as formerly discussed here for a one-pole filter, paired with trapezoidal integration), all the problems of the olden SVF go away. One can use Andy's SVF and sweep the cutoff all the way up to Nyquist, and it won't explode at any Q.

This is what "it" was about, and it has become something else entirely, if not a paradigm change, for some. In any case, for those who still doubt the usefulness of the zdf discussion back then: compare the original Chamberlin (or a DF biquad implementation, fwiw) with Andy's SVF (or Vadim's in his book) when quickly sweeping the cutoff from DC to Nyquist and back :clown:

(sorry, couldn't resist bringing it up...)

Post

Thanks, Urs, for taking the time to give such a good explanation. I'm not being intentionally obtuse, and I did read Z1202's book a while back. Next time I need non-trivial IIR filters I will try out ZDF. I was always more interested in using filters rather than designing them. My mathematical memory is not so great, so it won't be beneficial to intensely study ZDF until it is time to implement; I'd just have to study it again when I get around to writing code. :)

It may seem strange, but for my last few years writing music software I didn't need more than occasional simple filters. I was working on features such as GUI, notation, MIDI, audio chord detection, playback and recording, plugin hosting, mixing, algorithmic composition, pitch and time stretch, assembling audio tracks on the fly out of bits and pieces of disk audio files, etc.

I had already reached retirement age, but finally had to quit full-time programming because I couldn't stay awake in front of the computer anymore. Idiopathic hypersomnia, they called it. Which may be the medical terminology for "Burned Out On Programming". It is not severe during non-computer tasks, or non-programming activity on the computer. But sitting down for a day of programming was like taking a handful of quaaludes. :)

So lately I program an hour or two a day if I feel like it. I will eventually do something that needs IIR filters, and then try the zdf.

A couple of curiosity questions re ZDF:

* Does the ZDF SVF have completely complementary highpass, bandpass, and lowpass outputs? Long ago, tweaking Chamberlin's SVF, it seemed that one could try to tweak one of the outputs, but that only made the other two outputs' response shapes even less ideal.

* Can the zero delay technique somehow cure sloppy filter curves when IIR filters are tuned above about fs/8? Or would one still need to oversample (assuming a source samplerate of 44.1 or 48 kHz) in order to get good filter shapes in the higher octaves?

It seems that ZDF discussions frequently involve nonlinearity and such. But it is rare that I would need to sweep a filter. Distortion is the last thing I'd want. Clean, boring and predictable would do quite nicely! :)

Post

Woah, James, sorry to hear about your condition :-|

I think we shouldn't call it "zero delay feedback" anymore. What Andy does in his papers is simply a different approach to filter design which automatically comes without unit delays. Instead of transforming poles and zeros from the s-plane to the z-plane (an abstract model), he solves the currents and voltages of the analogue circuit directly using Ohm's law, Kirchhoff's voltage law, some maths and the trapezoidal rule. What started out as "zero delay feedback" back then has become "Topology Preserving Transform" or "implicit circuit discretization based on classic nodal analysis and the trapezoidal rule" or whatever. I don't think there's any fancy name yet, but zdf doesn't quite capture it. "Circuit Modeling" might capture it but is too ambiguous...

With this SVF, yes, you can mix filter outputs (or inputs) as you wish; they are fully complementary, or "in phase". This is how shelving, peaking, notch and allpass types are realized.
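
For illustration, these are the output mixes from Andy's paper, as far as I recall, in the m0/m1/m2 convention of the tick() posted above (v0 = input, v1 = bandpass, v2 = lowpass, k = 1/Q):

Code: Select all

    // output = m0*v0 + m1*v1 + m2*v2
    m0 = 0; m1 = 0;    m2 = 1;  // lowpass
    m0 = 0; m1 = 1;    m2 = 0;  // bandpass
    m0 = 1; m1 = -k;   m2 = -1; // highpass = input - k*band - low
    m0 = 1; m1 = -k;   m2 = 0;  // notch    = lowpass + highpass
    m0 = 1; m1 = -2*k; m2 = 0;  // allpass
    m0 = 1; m1 = -k;   m2 = -2; // peak     = highpass - lowpass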

Some of the sloppy curves might have been related to effects other than the mapping of infinite Hz to Nyquist. I don't know exactly how these filters' phase/magnitude response compares to analogue ones near Nyquist.

As for non-linearity... these papers show how simple it is to compute a full "non-abstract" equivalent to biquad filters without the obvious disadvantages. One of the advantages of "preserving topology" is that the resistors, transistors, diodes, OpAmps etc. have direct equivalents in the equations. So instead of using ideal elements, i.e. passive resistors, one can just as well use non-linear ones. This, however, drives the computational cost of solving them implicitly into CPU-smoking realms. Which is why a lot of the discussion really isn't about "zero delay feedback" anymore, but about methods and maths to solve these equations with tricks and cheats that bring the computational cost down.
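
To make that cost concrete, here is a hypothetical one-pole lowpass with a tanh element, discretized with the trapezoidal rule and solved implicitly with a few Newton-Raphson iterations per sample (the names and constants are mine, just to show the shape of the approach):

Code: Select all

    #include <cmath>

    // Non-linear trapezoidal one-pole: solve y = s + g*(tanh(x) - tanh(y))
    // for y each sample. Several tanh evaluations per sample is exactly the
    // kind of cost that makes implicit non-linear solving expensive.
    struct NonlinearOnePole {
        float g = 0.1f; // prewarped integrator gain, g = tan(pi*fc/fs)
        float s = 0.0f; // integrator state

        float tick(float x) {
            float y = s;                  // previous state as initial guess
            for (int i = 0; i < 4; ++i) { // a few Newton iterations usually suffice
                float t   = std::tanh(y);
                float fy  = y - s - g * (std::tanh(x) - t);
                float dfy = 1.0f + g * (1.0f - t * t);
                y -= fy / dfy;
            }
            s = 2.0f * y - s; // trapezoidal state update
            return y;
        }
    };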

Post

One thing that keeps me sticking to traditional s-plane/z-plane filter design for EQ (static filtering) applications is the possibility of "decramping" the frequency response near Nyquist (Orfanidis' paper), i.e. compensating for the bilinear transform pinning the response at Nyquist to the analogue response at infinite frequency.
I guess you can't achieve this with trapezoidal integration of physical models, can you? I once experimented with decramping the integrators, but had no luck.

Post

karrikuh wrote:One thing that keeps me sticking to traditional s-plane/z-plane filter design for EQ (static filtering) applications is the possibility of "decramping" the frequency response near Nyquist (Orfanidis' paper).
I guess you can't achieve this with trapezoidal integration of physical models, can you? I once experimented with decramping the integrators, but had no luck.
Well, Andy describes in his papers how to get from the biquad coefficients a0 ... b2 to the SVF coefficients and back. By SVF coefficients he means factors that can be chosen independently for cutoff, resonance and the HP, LP and BP output mix. Both methods are completely interchangeable, but the SVF method spares you a headache or two, both in theory and in practice.

I've never dealt with "decramping", but I'm sure it's possible with the SVF as well. Someone just needs to write it down. In my first post I wasn't aware that Andy had already done the whole pole-zero mumbo jumbo, or H(s) -> H(z), for this method. I guess it's just a question of time until anything found in papers about biquads is also written down for this method. He should give it a catchy name such as "SVF Audio EQ Cookbook".

- U
