Multi-stage oversampling and tuning of associated high-order filters

DSP, Plugin and Host development discussion.

Post

sault wrote:If I need hi-fi quality then sure, I'm going to go with FIR filters. If I'm building a guitar pedal plugin and need as close to real-time as possible with 8x or 16x oversampling, then I'm building an interpolator on top of a min-phase FIR.
I probably haven't been putting the point across well. Hopefully this is clearer: your interpolator is a FIR; the only difference is that you're having to calculate all the coefficients on the fly.
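To make that concrete, here is a small sketch (illustrative only; the tap count, window and cutoff are arbitrary choices, not anything sault described): an interpolator that computes windowed-sinc coefficients on the fly is still just a dot product of an FIR kernel with the input history.

import numpy as np

def sinc_interp_taps(frac, n_taps=16, cutoff=0.45):
    # Tap positions relative to the (fractional) output instant, which sits
    # 'frac' samples past the centre of the history buffer.
    n = np.arange(n_taps) - (n_taps - 1) / 2 - frac
    taps = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal lowpass sampled at those positions
    taps *= np.blackman(n_taps)                   # window so the kernel is finite
    return taps / np.sum(taps)                    # normalise DC gain to 1

def interp_sample(history, frac):
    # history: the last n_taps input samples, oldest first.
    return float(np.dot(history, sinc_interp_taps(frac, n_taps=len(history))))

Whether the taps come from a precomputed table or are recomputed per sample, the per-output cost is the same FIR dot product.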

Post

Ivan_C wrote: Let's imagine that we have some oversampling library, allowing you to oversample from 2 times up to 32 times using a multi-stage approach. That means that the whole process is divided into multiple sub-processes, each oversampling by 2 only, with low-pass half-band filters.
The first thing you want to do is throw away that library and find one that can do this in a single pass, because doing a multi-stage approach like this is just an absolutely pointless waste of CPU.
Now, I would like to choose the best options for every filter being used there. I can choose between two polyphase approaches: FIR (linear phase, so a lot of latency) vs. IIR (minimum/nonlinear phase). And whatever the approach, I can choose, for every single stage, the attenuation and the bandwidth of the half-band filters (both the up and the down filters).
Don't use half-band filters. Use a poly-phase FIR going straight from the source rate to the target rate.
  • The main goal of multi-stage oversampling is to reduce CPU load thanks to the properties of half-band filters and the associated polyphase implementation.
Nonsense. The main goal of multi-stage oversampling is to avoid having to write poly-phase FIR code because you are too lazy to figure it out. Seriously, that's the only valid reason to use half-band cascades that I can think of.
  • Upsampling filter and downsampling filter don't have the same purpose: the upsampling filter is there to remove the zero-stuffing artefacts before the actual oversampled processing, and the downsampling filter's purpose is to remove the useless information above Fs/4 to prevent aliasing.
This has absolutely nothing to do with whether or not you want to multi-stage (which you most definitely do not want to do). While the up-sampling images from zero-stuffing are typically higher in amplitude than the partials from bandwidth expansion that need to be removed to prevent aliasing, in practice the latter tend to cause more (audible) problems, so it actually ends up making sense to use roughly the same filter in both directions (plus you save the trouble of having to keep two filters around).
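To make the earlier "go straight from the source rate to the target rate" suggestion concrete, here is a minimal polyphase-upsampler sketch (the firwin prototype and the 16 taps per branch are placeholder choices, not recommendations from the thread):

import numpy as np
from scipy import signal

def polyphase_branches(L, taps_per_branch=16, rel_cutoff=0.9):
    # One prototype lowpass designed at the high rate; the factor L compensates
    # for the gain loss of zero-stuffing.
    proto = signal.firwin(L * taps_per_branch, rel_cutoff / L) * L
    # Branch k holds taps h[k], h[k + L], h[k + 2L], ...
    return proto.reshape(taps_per_branch, L).T

def upsample(x, branches):
    L, tpb = branches.shape
    hist = np.concatenate([np.zeros(tpb - 1), x])   # zero history for the first outputs
    y = np.empty(len(x) * L)
    for n in range(len(x)):
        frame = hist[n:n + tpb][::-1]               # newest input sample first
        for k in range(L):                          # each branch emits one of the L outputs
            y[n * L + k] = np.dot(branches[k], frame)
    return y

The zero-stuffed samples are never multiplied: each output costs taps_per_branch multiplies regardless of the oversampling factor, which is what makes the single-pass approach competitive.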

Post

Nonsense. The main goal of multi-stage oversampling is to avoid having to write poly-phase FIR code because you are too lazy to figure it out. Seriously, that's the only valid reason to use half-band cascades that I can think of.
What's the latency for a single poly-phase FIR (let's say for 32x) and how does that compare to multiple stages of half-band IIRs?

Post

Andrew J wrote:
Nonsense. The main goal of multi-stage oversampling is to avoid having to write poly-phase FIR code because you are too lazy to figure it out. Seriously, that's the only valid reason to use half-band cascades that I can think of.
What's the latency for a single poly-phase FIR (let's say for 32x) and how does that compare to multiple stages of half-band IIRs?
The latency is the same as any FIR and depends on the kernel!

For linear-phase it's half the filter length at the higher sample-rate, or equivalently half the "taps per branch" at the lower sample-rate. Since for any given quality target you need (roughly) the same number of taps "per branch" independent of how many branches you actually have (eg. 32x would have 32 branches), you would typically have the same actual latency independent of how much you oversample.
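A quick numerical version of that claim (16 taps per branch is just an assumed quality target):

taps_per_branch = 16                      # assumed quality target, same for every factor
for L in (2, 8, 32):                      # oversampling factor = number of branches
    kernel_len = L * taps_per_branch      # total FIR length at the higher rate
    latency_hi = kernel_len // 2          # linear phase: half the kernel, in high-rate samples
    latency_lo = latency_hi // L          # the same delay expressed at the base rate
    print(L, kernel_len, latency_hi, latency_lo)   # latency_lo is 8 base-rate samples every time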

For minimum-phase the latency is obviously much less, but it's frequency dependent and depends on the response.

edit: in case you're wondering, you won't beat the latency with multiple stages either, since your first 2x filter already needs as much latency as the whole polyphase for the same quality, which means that any latency from later stages just adds more on top of what you actually need

Post

Has anyone here used a cepstrum to build a minimum-phase FIR? Is this a reasonable method?

Post

Polyphase vs multistage is a good old topic.
See, for example:
viewtopic.php?f=33&t=469205


Designing a polyphase filter is, in my opinion, much simpler than chaining optimized decimators (each decimator's filter order and characteristics being optimized by taking into account the characteristics of the other stages, plus the use of zero coefficients and symmetries).

In my experience, a well-designed decimator chain can be more CPU-efficient than a single polyphase stage, but it is constrained to power-of-two decimation ratios.

Here is some jsfx code, with a chain of optimised /2 decimators:
https://forum.cockos.com/showpost.php?p ... stcount=44
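For readers who don't want to dig through the jsfx, here is roughly the idea in Python (a sketch, not a transcription of the linked code):

import numpy as np
from scipy import signal

def halfband_taps(n_taps=31):
    # A windowed-sinc lowpass with its cutoff at a quarter of the input rate is a
    # half-band filter: every second tap away from the centre is (essentially) zero,
    # and the kernel is symmetric, which is where the CPU saving comes from.
    assert n_taps % 4 == 3                # e.g. 31, so the outermost taps are nonzero
    return signal.firwin(n_taps, 0.5)

def decimate2(x, h):
    # Written as a full convolution for clarity; an optimised version would skip
    # the zero taps, fold the symmetric pairs, and only compute the kept samples.
    return np.convolve(x, h)[:len(x)][::2]

# Decimate by 8 with a chain of three /2 stages (a tuned chain would give each
# stage its own, individually optimised filter, as described above):
# h = halfband_taps()
# y = decimate2(decimate2(decimate2(x, h), h), h)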
See you here and there... Youtube, Google Play, SoundCloud...

Post

matt42 wrote:Has anyone here used a cepstrum to build a minimum-phase FIR? Is this a reasonable method?
Yeah it works, just keep your magnitude response strictly positive (eg. by adding a tiny constant; this can be like -400dB though) so the logarithm stays finite. You should also expect slightly imperfect results since there will be some aliasing... but in practice it's not a huge deal for filters of any reasonable length.
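For reference, a sketch of the homomorphic (real-cepstrum) construction being discussed; the 1e-20 magnitude floor is the roughly -400 dB constant mentioned above, and the long FFT is what keeps the aliasing tolerable:

import numpy as np

def minimum_phase(h, oversample=8):
    n = len(h) * oversample                              # long FFT keeps the cepstral aliasing small
    mag = np.maximum(np.abs(np.fft.fft(h, n)), 1e-20)    # strictly positive magnitude (~ -400 dB floor)
    cep = np.fft.ifft(np.log(mag)).real                  # real cepstrum of the magnitude response
    cep[1:n // 2] *= 2.0                                 # fold the anti-causal half onto the causal half
    cep[n // 2 + 1:] = 0.0                               # (DC and Nyquist cepstral bins stay as they are)
    h_min = np.fft.ifft(np.exp(np.fft.fft(cep))).real
    return h_min[:len(h)]                                # same magnitude response, minimum phase, truncated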

Post

Ivan_C wrote:OK, I got it. Yes, you are totally right, the upsampling filter needs to be very good to prevent anything bad from happening because of the zero-stuffing artefacts ;)
This is effectively semantics, for most, but zero stuffing introduces no artifacts. It simply changes the sample rate—no change in the spectrum. But that means that some of the aliased (mirrored) parts of the spectrum now reside below the Nyquist frequency (half the sample rate). They didn't move, the Nyquist frequency moved. The filtering is about removing that stuff before moving on (say, with non-linear processing). It was always there—no new artifacts—you just didn't care about it before because your DAC would remove it when you played the audio.
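A tiny numerical illustration of that (the 1 kHz sine and the rates are made up for the example):

import numpy as np

fs, f0, N = 8000, 1000, 1024
x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)   # 1 kHz sine at the original 8 kHz rate
x4 = np.zeros(4 * N)
x4[::4] = x                                      # zero stuffing: the rate is now 32 kHz
spec = np.abs(np.fft.rfft(x4))
freqs = np.fft.rfftfreq(len(x4), d=1.0 / (4 * fs))
print(freqs[spec > 0.25 * spec.max()])
# -> [ 1000.  7000.  9000. 15000.]  The images at k*8 kHz +/- 1 kHz were always part of the
# sampled signal's spectrum; zero stuffing just moved the Nyquist limit above them.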

http://www.earlevel.com/main/tag/sampli ... ?order=asc
My audio DSP blog: earlevel.com

Post

Smashed Transistors wrote:Polyphase vs multistage is a good old topic.
But polyphase and multistage are different things, no? You can have a multistage SRC with polyphase filters, for instance. It's like saying "lowpass versus delay elements".
My audio DSP blog: earlevel.com

Post

mystran wrote:
Ivan_C wrote:The main goal of multi-stage oversampling is to reduce CPU load thanks to the properties of half-band filters and the associated polyphase implementation.
Nonsense. The main goal of multi-stage oversampling is to avoid having to write poly-phase FIR code because you are too lazy to figure it out. Seriously, that's the only valid reason to use half-band cascades that I can think of.
Wait...are you saying there are no efficiencies to be had using multistage SRC, and the only reason is a lazier derivation? That's certainly not true.

The half-band component is a separate issue. There are situations where you can use a half-band filter and get an added efficiency, and plenty of times they won't work out. But that's just a bonus if it works, not the reason to use multistage.
My audio DSP blog: earlevel.com

Post

And this leads back to the point of my question about the latency. I wasn't asking about differences between single-stage FIR and multi-stage FIR, but between single-stage FIR and multi-stage IIR. If you're interested in low latency then surely that's a valid reason to use cascaded IIRs.

Post

earlevel wrote:
mystran wrote:
Ivan_C wrote:The main goal of multi-stage oversampling is to reduce CPU load thanks to the properties of half-band filters and the associated polyphase implementation.
Nonsense. The main goal of multi-stage oversampling is to avoid having to write poly-phase FIR code because you are too lazy to figure it out. Seriously, that's the only valid reason to use half-band cascades that I can think of.
Wait...are you saying there are no efficiencies to be had using multistage SRC, and the only reason is a lazier derivation? That's certainly not true.

The half-band component is a separate issue. There are situations where you can use a half-band filter and get an added efficiency, and plenty of times they won't work out. But that's just a bonus if it works, not the reason to use multistage.
Since a polyphase FIR has (theoretically) a fixed cost per branch, any two-stage approach is more efficient if the cost (per branch) of the 2nd (higher-rate) stage is less than 1/N, where N is the branch factor of the 1st stage. If you count raw ALU ops, then you can make this work, because you have tons of extra bandwidth available for a wide transition, so the second stage can use a much shorter kernel. That's the theory anyway (and the same basic points apply to IIRs too, even though they typically end up being slower than FIRs unless you can get away with one or two poles).
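To put rough numbers on that argument (all figures invented for illustration, counting only multiplies per input sample):

taps_per_branch = 16                      # steep-transition quality target (assumed)
N = 8                                     # total oversampling factor

single_stage = N * taps_per_branch        # N branches x 16 taps = 128 multiplies per input sample

first_stage  = 2 * taps_per_branch        # steep 2x stage: 2 branches x 16 taps = 32
second_stage = 2 * (N // 2) * 4           # wide-transition 4x stage (say 4 taps per branch),
                                          # run on both intermediate samples: 2 x 4 x 4 = 32
print(single_stage, first_stage + second_stage)   # 128 vs 64 in this toy ALU count
# ...which is exactly the kind of raw op count that, per the next paragraph, may not
# survive contact with a real CPU's caches and pipelines.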

In practice, my personal experience is that this doesn't actually seem to work for conventional desktop CPUs. My best guess is that the cost of extra cache bandwidth and/or pipeline bottlenecks from serial dependencies is what offsets the ALU savings (eg. bouncing the first stage output through memory is typically already more expensive than just doing it all in one pass). I would imagine the pipeline problems are also the main reason why IIR half-band filters tend to be much slower (in comparison to FIR kernels) than predicted by their ALU cost.

If you can make it work, then go for it. Just make sure to benchmark it against a single-pass polyphase scheme to make sure you're faster in actual practice and not just in some theoretical realm of ALU op counts (which on conventional desktop CPUs in particular tends to be a really poor predictor of actual performance).

edit: I should also add that doing a 2x followed by Nx two-stage approach, for a total of 2*N oversampling, does get more attractive as N grows larger, but in this case you need the polyphase code for the 2nd stage anyway, and in my personal experience it's not really worth it for typical oversampling factors in the 4-8x range (maybe it would start to get useful if you're going for 16x or higher).
Last edited by mystran on Sun Jan 14, 2018 12:01 am, edited 1 time in total.

Post

mystran wrote:
earlevel wrote:
mystran wrote:
Ivan_C wrote:The main goal of multi-stage oversampling is to reduce CPU load thanks to the properties of half-band filters and the associated polyphase implementation.
Nonsense. The main goal of multi-stage oversampling is to avoid having to write poly-phase FIR code because you are too lazy to figure it out. Seriously, that's the only valid reason to use half-band cascades that I can think of.
Wait...are you saying there are no efficiencies to be had using multistage SRC, and the only reason is a lazier derivation? That's certainly not true...
...In practice, my personal experience is that this doesn't actually seem to work for conventional desktop CPUs. My best guess is that the cost of extra cache bandwidth and/or pipeline bottlenecks from serial dependencies is what offsets the ALU savings (eg. bouncing the first stage output through memory is typically already more expensive than just doing it all in one pass). I would imagine the pipeline problems are also the main reason why IIR half-band filters tend to be much slower (in comparison to FIR kernels) than predicted by their ALU cost...
OK, I wasn't thinking of CPUs—my implementations have been in DSP chips, where it certainly wins to do multistage. I have an opportunity coming up that requires SRC up and back natively on the CPU; it will be interesting to see, thanks.
My audio DSP blog: earlevel.com

Post

earlevel wrote:OK, I wasn't thinking of CPUs—my implementations have been in DSP chips, where it certainly wins to do multistage. I have an opportunity coming up that requires SRC up and back natively on the CPU; it will be interesting to see, thanks.
I must apologise for not being clear about this, to be honest. Also, if you see my edit, I do realise that this gets FAR more attractive if your oversampling rate is large (as would be the case for something like a DAC implementation), where you certainly don't want to pay the cost of the steep transition for every sample.

It's also worth keeping in mind that if you're working in fixed point, you can do additional tricks like using cheap (cascades of) boxcar filters for the bulk of the spectrum and fixing it up with pre-emphasis in the baseband, etc., but the original post kinda suggested to me that we're probably not looking for this kind of stuff (and I apologise if this assumption was not correct).

Post

matt42 wrote:Has anyone here used a cepstrum to build a minimum-phase FIR? Is this a reasonable method?
What's the point of turning a linear-phase FIR into a minimum-phase FIR? I think the point of using FIR in the first place is its linear-phase characteristic. If you want minimum phase anyway, why not just use a minimum-phase IIR design such as an elliptic filter (which I have happily used for up- and downsampling forever)?
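For concreteness, the kind of elliptic design being described might look like this (the order, ripple and attenuation numbers are placeholders, not Robin's actual settings):

from scipy import signal

def oversampling_ellip(order=8, ripple_db=0.1, atten_db=96, band_edge=0.45):
    # band_edge is relative to the lower rate's Nyquist; at 2x oversampling it sits
    # at band_edge / 2 of the higher rate's Nyquist, leaving the rest as transition band.
    return signal.ellip(order, ripple_db, atten_db, band_edge / 2,
                        btype='low', output='sos')

sos = oversampling_ellip()
# Upsample by 2: zero-stuff, then filter (times 2 to compensate the stuffing):
#   up = np.zeros(2 * len(x)); up[::2] = x; y = 2.0 * signal.sosfilt(sos, up)
# Downsample by 2: filter first, then keep every second sample:
#   y = signal.sosfilt(sos, x_oversampled)[::2]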
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook
