Arbitrary ratio resampling

DSP, Plugin and Host development discussion.

Post

matt42 wrote:
JCJR wrote:I agree with matt42, unless you need something quick'n'dirty and don't have a lot of time to experiment with stuff, it may be worth learning "good quality generic but possibly slow" methods to have in the toolshed when needed. Even if you decide to use something cheaper.
Indeed, but even if you decide to go cheap, it still doesn't make sense to use a polynomial interpolator at a fixed ratio. If you really want the Hermite kernel, you could just derive the coefficients at the fixed points and dispense with the polynomial calculations. Or just design a short FIR (from this perspective, every windowed-sinc FIR is really a look-up table for a windowed-sinc interpolator).
For cubics (whether Hermite, Lagrange, or B-spline), precalculating the kernels might not win you anything unless you can process in large enough blocks, since the table lookups can cause extra cache misses, and it doesn't take many of those to negate any performance gains (e.g. you can typically calculate a kernel or two in the time it takes to resolve a cache miss, and even an L1 hit might not be much faster).

Post

mystran wrote:For cubics (whether Hermite, Lagrange, or B-spline), precalculating the kernels might not win you anything unless you can process in large enough blocks, since the table lookups can cause extra cache misses, and it doesn't take many of those to negate any performance gains (e.g. you can typically calculate a kernel or two in the time it takes to resolve a cache miss, and even an L1 hit might not be much faster).
Yes, I probably wasn't very clear. I was talking about fixed integer-ratio resampling, so we don't need some giant table for a short kernel; it's just a case of storing a few filter coefficients vs. a few polynomial coefficients. (I was referring to the 4x up/down sampling stages of the scheme.)

In the case of fractional resampling I agree. It would be bloated overkill to generate a massive table for a tiny kernel, and it would, as you say, increase the chance of cache misses, etc.

Edit:

With regard to using a polynomial interpolator for fixed integer-ratio resampling, it feels less like a deliberate design choice and more like stumbling on an interpolator and assuming it's a good one. :shrug: (Perhaps I'm projecting; I made exactly that mistake ten or so years ago. :clown:)
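To make the fixed-ratio point concrete, here is a small sketch (in Python, purely illustrative) of how a cubic Hermite (Catmull-Rom) interpolator used at a fixed phase collapses into a precomputed FIR. For 2x upsampling the only fractional position ever evaluated is t = 0.5, which reduces the whole polynomial to the fixed 4-tap kernel [-1/16, 9/16, 9/16, -1/16]:

```python
# Sketch: a cubic Hermite (Catmull-Rom) interpolator evaluated only at
# t = 0.5, as in 2x upsampling, is equivalent to one fixed 4-tap FIR.

def hermite(y0, y1, y2, y3, t):
    """Catmull-Rom cubic: interpolate between y1 and y2 at fraction t."""
    c1 = 0.5 * (y2 - y0)
    c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2)
    return ((c3 * t + c2) * t + c1) * t + y1

# The midpoint (t = 0.5) as a precomputed FIR, no per-sample polynomial math:
MID_TAPS = (-1.0 / 16.0, 9.0 / 16.0, 9.0 / 16.0, -1.0 / 16.0)

def hermite_midpoint_fir(y0, y1, y2, y3):
    return sum(c * y for c, y in zip(MID_TAPS, (y0, y1, y2, y3)))
```

Both paths produce the same result; the FIR form simply skips the polynomial setup entirely, which is the point about deriving the coefficients once at the fixed points.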

Post

Well, polynomial interpolators are great for many things (e.g. you probably shouldn't be using sinc interpolation for your control signals), just not for audio resampling.
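As a sketch of that use case (illustrative Python, not any particular plugin's code): cubic Hermite interpolation is a reasonable way to smooth sparse control/automation points up to a denser rate, where sinc interpolation's cost and ringing buy you nothing:

```python
# Sketch: smooth a sparse list of control values with Catmull-Rom
# interpolation, producing `steps` output samples per input segment.
# Endpoints are clamped so the curve passes through every control point.

def smooth_control(points, steps):
    out = []
    n = len(points)
    for i in range(n - 1):
        y0 = points[max(i - 1, 0)]   # clamped left neighbour
        y1 = points[i]
        y2 = points[i + 1]
        y3 = points[min(i + 2, n - 1)]  # clamped right neighbour
        for s in range(steps):
            t = s / steps
            c1 = 0.5 * (y2 - y0)
            c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
            c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2)
            out.append(((c3 * t + c2) * t + c1) * t + y1)
    out.append(points[-1])
    return out
```

The curve hits each control point exactly at segment boundaries, with a smooth ramp in between, which is usually all a control signal needs.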

Post

mystran wrote:
JCJR wrote:I agree with matt42, unless you need something quick'n'dirty and don't have a lot of time to experiment with stuff, it may be worth learning "good quality generic but possibly slow" methods to have in the toolshed when needed. Even if you decide to use something cheaper.
This quote highlights the main misconception about sinc-interpolation when it comes to resampling: the actual processing cost is likely to be less than what you would spend on your quick&dirty method.

In terms of madds, you expect a fixed 2x FIR-oversampling scheme to spend about twice the multiply-adds compared to doing it in one pass with a fractional FIR (assuming linear kernel interpolation, which doubles the number of madds). If you go for 4x, the figure is four times as many madds. What this means in practice is that even if your fractional FIR is "possibly slow", it's STILL going to be faster than your "quick&dirty" method.
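A hedged sketch of what such a fractional FIR can look like (illustrative Python; the tap count, phase count, and Hann window are arbitrary choices, not anyone's production settings): a windowed sinc tabulated at a number of phases, with linear interpolation between adjacent phase rows at run time:

```python
import math

TAPS = 16      # taps per tabulated phase (illustrative choice)
PHASES = 64    # number of tabulated fractional positions (illustrative)

def make_table():
    """Tabulate a Hann-windowed sinc at PHASES+1 fractional offsets,
    each row normalized for unity DC gain."""
    table = []
    for p in range(PHASES + 1):             # +1 row so row p+1 always exists
        frac = p / PHASES
        row = []
        for i in range(TAPS):
            x = (i - TAPS // 2 + 1) - frac  # tap distance from output point
            if abs(x) >= TAPS // 2:
                row.append(0.0)
                continue
            w = 0.5 + 0.5 * math.cos(math.pi * x / (TAPS // 2))  # Hann window
            s = 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)
            row.append(w * s)
        norm = sum(row)
        table.append([c / norm for c in row])
    return table

def resample(signal, ratio, table):
    """Resample `signal` by `ratio` = output rate / input rate."""
    out = []
    pos = float(TAPS // 2)                  # stay clear of the kernel wings
    while pos + TAPS // 2 < len(signal):
        n = int(pos)
        q = (pos - n) * PHASES
        p = int(q)
        a = q - p                           # blend factor between phases
        acc = 0.0
        for i in range(TAPS):
            c = (1.0 - a) * table[p][i] + a * table[p + 1][i]
            acc += c * signal[n - TAPS // 2 + 1 + i]
        out.append(acc)
        pos += 1.0 / ratio
    return out
```

If you additionally store per-phase difference rows, the coefficient blend becomes a single madd per tap, which gives the "linear kernel interpolation doubles the madds" accounting relative to a plain FIR.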
Thanks, mystran.

In my current state of ignorance I would be inclined to guess that what you say is probably correct. At the very least it would seem a good idea to have a good, durable fractional FIR object in the toolkit, to use as a comparative benchmark if nothing else. And as you say, at the end of the day the fractional FIR may turn out to be the simplest and fastest-executing solution.

I was a slow worker and usually under time constraints. Entering something I'd never done before, a preferred outcome was "finished code that works" rather than "unfinished code that might have worked even better if it had got finished in time." :)

I tried to be somewhat scientific when estimating between options of which I was equally ignorant, but some of it was unavoidably "wild guessing", and at the time all those CPU ops in an oversampled fractional FIR didn't look encouraging.

Well, another thing influenced my thinking at the time: it seemed fairly easy to speed up any algorithm that could be boiled down to a tight asm loop keeping all variables in the CPU's integer and float registers, ripping through a buffer and only reading/writing samples. I was less enthusiastic about algorithms "too big" to break down into little fast tight loops. But that kind of thinking may be the CPU-cycle-miser's equivalent of "penny wise and pound foolish". :)
