linear phase oversampling

DSP, Plugin and Host development discussion.

Post

mystran wrote:One quick note: if you're going for higher than 2x, there's not necessarily much point in doing cascades with FIRs, as you can design for more branches directly instead. This way you can also do 3x or 5x or 7x or whatever else you might find useful and you only need to run one pass (=convolve the input with each of the branches, then interleave the outputs for up-sampling, or convolve the input with the respective branch and sum for down-sampling).
hm... not getting it. I'm afraid the terms "branch" and "convolve" don't click with me in this context :( . Also I'm not very familiar with the concept of interleaving. Maybe you can point me to some text explaining it? (not the words, the concept).

Post

aciddose wrote:The whole layered system doesn't make much sense when you can do a single convolution instead.

The convolution may be more expensive, but if you integrate ahead of time (blep) the convolution is transformed into a simple addition.

You can also drop any samples with a delta lower than some limit... say you want to allow aliasing noise up to -80db, simply skip any delta lower than 1/10000.


See http://sourceforge.net/projects/protracker/ for a naive example of this in action.

BTW: somebody needs to contribute an accurate sallen-key filter to the project. Those I've been able to find online so far with claims they are sallen-key don't actually provide a sallen-key implementation, for example independent gain/cutoff/Q. The implementation itself should be super simple but I don't have the resources to do it myself.
So basically you are suggesting I should use one FIR for up-sampling to 8x? With the non-linear-phase filter you suggested? Hmm... do you know that I want to oversample a complex signal, not an oscillator or other "simple" sound source?
I may be talking out of my arm pit here. Although the concepts seem simple when I see the actual code in front of me, the terminology still confuses me so much :cry: . But it's getting better 8) . I'll have to do more homework! I'll have a look at the code.

Post

Convolution is simply how FIR is done.
It's fairly simple to explain: an impulse response travels over the waveform, multiplying and accumulating every sample of the impulse response with the corresponding input sample, then the input is shifted by one and it all happens again. Or something like that :hihi:
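For anyone following along, here's a minimal sketch of that multiply-accumulate loop in plain Python (the function name is my own; samples before the start of the input are taken as zero):

```python
def fir_convolve(x, h):
    """Direct-form FIR: for each output sample, multiply-accumulate the
    impulse response h against the most recent input samples."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:  # samples before the start are treated as zero
                acc += h[k] * x[n - k]
        y.append(acc)
    return y
```

Feeding in a unit impulse just plays back the impulse response, which is a handy sanity check.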

Post

No I'm talking about decimation.

For up-sampling use a more efficient interpolation. Of course it is possible to construct a 1/8th filter and build your windowed sinc interpolation exactly this way, but some other existing methods in widespread use are far more efficient.

Decimation is the expensive part, up-sampling is trivial.

Now if you are up-sampling and only want to eliminate aliasing above nyquist, but do not care about aliasing below then this very same sinc filter as used in the protracker project is the most efficient possible method in existence.

For example up-sampling for a filter where zero-padding would otherwise make sense, this will give exactly the same effect minus nyquist reflected aliases.
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

dasdeck wrote: hm... not getting it. I'm afraid the terms "branch" and "convolve" don't click with me in this context :( . Also I'm not very familiar with the concept of interleaving. Maybe you can point me to some text explaining it? (not the words, the concept).
Maybe this might help:

https://christianfloisand.wordpress.com/tag/fir-filter/

Has some pictures and generic code.
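And here's a rough sketch of what mystran described, in Python with NumPy (hypothetical code, not from the linked article): split the filter h into L branches h[k::L], convolve the input with each branch, then interleave the branch outputs. This gives the same result as zero-stuffing followed by one big convolution, while skipping all the multiplies by zero.

```python
import numpy as np

def upsample_naive(x, h, L):
    """Zero-stuff by factor L, then convolve with the full filter h."""
    padded = np.zeros(len(x) * L)
    padded[::L] = x
    return np.convolve(padded, h)[:len(x) * L]

def upsample_polyphase(x, h, L):
    """Convolve the input with each of the L branches h[k::L],
    then interleave the branch outputs."""
    y = np.zeros(len(x) * L)
    for k in range(L):
        y[k::L] = np.convolve(x, h[k::L])[:len(x)]
    return y
```

Both functions produce identical output; the polyphase form just runs each branch at the low rate.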

Post

aciddose wrote: Decimation is the expensive part, up-sampling is trivial.
I fail to see how there would be any kind of difference between the cost of up-sampling vs. down-sampling. As far as I can see, every optimization that you can do with one of them, you can also apply to the other one. For linear processes you can obviously skip one of these (doesn't even matter which one), but for general non-linear stuff there's no huge short-cuts for either (and the importance of quality interpolation should not be under-estimated).

As for ZOH-resampling with BLEP-steps.. IMHO it's only worth it when dealing with varying rates... maybe non-integer case too, but for the simple integer-factor case it does nothing except forces scatter filtering.

Post

It is many times more efficient than layered poly-phase filters in both cases when you want to filter at a fixed frequency.

For up-sampling you can do the equivalent of zero-padding with impulses to produce reasonable results, but only for specific purposes.

ZOH-resampling with blep doesn't make sense at all, ever, unless you specifically want to include aliases below nyquist such as when emulating a variable rate DAC.

If you can produce something more efficient by all means please go right ahead.

I will predict that this is impossible.

When I say up-sampling is trivial I'm referring to the fact that a typical spline interpolation will easily outperform most filters in terms of both complexity and cost.

Not the case for decimation.

Post

aciddose wrote:It is many times more efficient than layered poly-phase filters in both cases when you want to filter at a fixed frequency.
Oh, I already mentioned above that I don't think layering makes any sense.
ZOH-resampling with blep doesn't make sense at all, ever, unless you specifically want to include aliases below nyquist such as when emulating a variable rate DAC.
I thought you were referring to that approach earlier when talking about pre-integration and BLEPs, but I guess you must have had something else in mind.

Anyway, beyond DAC emulation, it also works well for getting rid of most of the noise that would normally result when down-sampling from a time-varying samplerate. I'm sure there are other possibilities, but using ZOH BLEPs is the best approach I've found for this particular case (much cleaner than delta-impulses with a post-integration pass, which is the next-best "cheap" approach I can think of); it should also be noted that sinc-resampling in this case is strictly speaking invalid (since it's the special case for fixed rates only).

Post

I was definitely talking about ZOH; it's just that I wasn't making any generalizations. I thought I was fairly clear about it being most efficient for these particular sorts of cases, yes.

The generalization I will make is that I think while up-sampling you can very often satisfy the requirements for a particular configuration by using one of many well known interpolations rather than filters.

I might just not have a clue what you're talking about then - when I'm saying "layering" I'm referring to applying polyphase half-band filters in layers to get 1/8th or similar. This would of course require three layers.

Were you talking about poly-phase filters designed with some other cutoff, not half-band ?

Yes, I 100% agree that either up-sampling or down-sampling with a variable rate is always going to be a case that specifically makes the blep ZOH preferable to almost any other method.

In some cases it can be equally expensive to use a windowed sinc, especially if integration will take place regardless. In those cases blep ZOH can be slightly more expensive rather than less.

I'm not so sure that it is accurate to say sinc-resampling is "invalid", as the goal is to fit a signal into a target rate. In which case we want to insert band-limited impulses.

We can't reconstruct the original signal accurately because we have no assurance that it was correctly band-limited though, sure.

The result we get however... unless I'm missing something very significant here... should be an accurate sampling of the original signal, regardless of whether it went through a variable-rate intermediate step, so long as the intermediate rate never drops below the target sampling rate.

This is easy to prove: an analog signal is no different from a variable-rate sampled signal. If we sample it at a variable rate, then apply a sinc filter to sample it at a fixed rate, I see no reason to consider the entire intermediate process as anything other than a filter applied to the analog signal before it is sampled at the fixed rate.

Such a variable-rate signal should in fact be considered analog, as the sample periods might have infinite temporal resolution. There is no way to distinguish a variable-rate sampled signal, once integrated, from an analog signal.

Post

aciddose wrote: Were you talking about poly-phase filters designed with some other cutoff, not half-band ?
I was trying to say: if you are going to use FIR filters, then I'd just go with a single N-branch Nth-band Nyquist (or similar) poly-phase filter, instead of trying to stack (or "layer") half-band filters. For IIR filters, going to 2x first might make sense, since this buys you a very wide transition band for the next stage going to the higher rate, but for FIR I just don't see much point. YMMV.
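A sketch of such an Nth-band filter, assuming a Kaiser-windowed sinc design (the tap count and beta here are arbitrary illustrative choices, not a recommendation):

```python
import numpy as np

def nth_band_lowpass(N, branch_taps=8, beta=8.0):
    """Kaiser-windowed sinc with cutoff at pi/N: an Nth-band ("Nyquist")
    filter. Every Nth coefficient away from the center lands on a zero of
    the sinc, so the polyphase branch containing the center tap is just a
    delay."""
    half = (branch_taps * N) // 2
    n = np.arange(-half, half + 1)
    return np.sinc(n / N) * np.kaiser(len(n), beta)
```

Splitting the result into h[k::N] gives the N branches; for up-sampling you would also scale by N to make up for the gain lost to zero-stuffing.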
I'm not so sure that it is accurate to say sinc-resampling is "invalid", as the goal is to fit a signal into a target rate. In which case we want to insert band-limited impulses.
Basically, if the rates are not fixed, then in terms of the time-warped fixed-rate signal the sampling is irregular. In this case simply replacing each sample with a sinc impulse is no longer correct, nor do the samples align with the zero-crossings of the nearby sincs. As the irregularity of the sampling increases, so does the error; in practice, for slow smooth modulation the errors are not huge, but for truly random sampling you get a lot of noise.

At some point I tried to look at the math involved in perfectly reconstructing such irregularly sampled signals, and it got a bit too dense a bit too fast, so I can't really help much there.. I just went with the ZOH-BLEP approximation that works well enough in most reasonable practical cases.

Post

Another possibility is using linear-phase FIR for 2x, then elliptic or similar IIR for higher rates; this won't be truly linear phase, but the group delay in the actual pass-band can stay pretty flat as the linear-phase FIR takes care of the steep transition.

Post

aciddose wrote: When I say up-sampling is trivial I'm referring to the fact that a typical spline interpolation will easily out-preform most filters in terms of both complexity and cost.
Polynomial interpolators are polyphase FIR, just with infinite branches. I.e., one is continuous and one is discrete, but they are the same thing. You can rearrange a polynomial interpolator into the form

f1(x)*smpl[0] + f2(x)*smpl[1] + f3(x)*smpl[2] ...

If you are upsampling by a fixed ratio, then you can precalculate all the fn(x) and just do the FIR directly.

The only reason to use a polynomial / spline interpolator is if you need to evaluate at arbitrary sub-sample positions.
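To illustrate the rearrangement, here's a 4-point Catmull-Rom cubic (my choice of interpolator; any 4-point polynomial works the same way) written in the f1(x)*smpl[0] + f2(x)*smpl[1] + ... form, with the tap sets precomputed for a fixed 8x ratio:

```python
def hermite_taps(frac):
    """4-point Catmull-Rom cubic rearranged as FIR weights applied to
    samples x[n-1], x[n], x[n+1], x[n+2], for fractional position
    frac in [0, 1)."""
    t = frac
    c0 = -0.5 * t**3 + t**2 - 0.5 * t
    c1 = 1.5 * t**3 - 2.5 * t**2 + 1.0
    c2 = -1.5 * t**3 + 2.0 * t**2 + 0.5 * t
    c3 = 0.5 * t**3 - 0.5 * t**2
    return [c0, c1, c2, c3]

# For a fixed 8x up-sampling ratio, precompute the 8 tap sets once:
TABLE = [hermite_taps(k / 8.0) for k in range(8)]
```

Stored as 32-bit floats, that table is 8 × 4 × 4 = 128 bytes. The weights at any frac sum to 1, which is the DC-preserving property you'd expect from an interpolator.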
Chris Jones
www.sonigen.com

Post

I was referring to IIR filters.

Also, while you can precalculate f#(), the memory required, the interpolation of the interpolation (?!), and the questions of number of samples taken, trade-offs and so on, balanced against the processing cost of the whole system, are in many cases actually beaten by simply calculating the polynomials in the first place and not bothering with all this other ridiculously complicated nonsense.

Post

aciddose wrote:I was referring to IIR filters.
OK. I wouldn't use IIR filters for up/down-sampling in any case, tbh.
Also, while you can precalculate f#(), the amount of memory required, the interpolation of the interpolation (?!) and the questions of number of samples taken, trade-offs and so on balanced with the processing cost of the whole system in many cases is actually beaten by simply calculating the polynomials in the first place and not bothering with all this other ridiculously complicated nonsense.
For a 4-point interpolator at 8x oversampling it's what, 128 bytes? No big deal. Less complicated than a Hermite interpolator, and 1/4 of the arithmetic ops.

(I personally wouldn't use such an FIR, but we're just talking about making the equivalent of a polynomial interpolator)

Post

Yes that is a big deal, especially if you want your code to be maintainable.

If you bothered to use templates and write the code in a reasonable format, the compiler does these optimizations for you.

Also, before you jump into optimizing what need not be optimized for your 1% savings, benchmark it.
