Designing a sinc interpolator for non-integer-ratio downsampling of a band-limited signal

DSP, Plugin and Host development discussion.

Post

matt42 wrote:
mystran wrote:It can be aliasing or it can be noise from the kernel interpolation.
This is a big part of my question. Other than aliasing due to poor band-limiting, can a kernel cause distortion, and how? Take linear interpolation; it's just a triangular kernel. Sure, it doesn't band-limit well and will attenuate the pass band, but would it add distortion to the source signal (assuming the source is band-limited to the target Nyquist) if we use it to downsample to a target sample rate?
Like I noted above, you can easily test this: feed it a band-limited saw-wave with slow(!) frequency sweep and look at the analyzer. If you see partials moving in different directions, you have aliasing (ie. the interpolation kernel is not good enough), if it's just a "general mess" then it's probably another issue. In the latter case, the kernel response is likely varying too much depending on the sub-sample position. This could happen if your kernel doesn't have enough taps (per branch) to maintain a good response when shifted (though I don't see this as a huge problem with any practical kernel length), or it could be because you don't have enough branches and/or your interpolation for "in between" values is poor and leads to noise (see below)... or it could even be a simple programming error, who knows.
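
That test is easy to script. Here's a minimal sketch of the test-signal side, assuming nothing about the code under test (the function name and constants are mine): sum saw partials up to just below Nyquist, each with its own phase accumulator so the slow sweep stays coherent, then run the result through the interpolator and watch an analyzer. Partials mirroring back down the spectrum indicate aliasing.

```cpp
#include <cmath>
#include <vector>

// Band-limited sawtooth with a slow exponential sweep from f0Start to f0End.
std::vector<float> makeSweptSaw(double sampleRate, double f0Start,
                                double f0End, double seconds)
{
    const double twoPi = 6.283185307179586;
    const double nyquist = 0.5 * sampleRate;
    const std::size_t n = static_cast<std::size_t>(seconds * sampleRate);
    std::vector<float> out(n);
    std::vector<double> phase(512, 0.0); // one phase accumulator per partial

    for (std::size_t i = 0; i < n; ++i) {
        const double u = static_cast<double>(i) / n;
        const double f0 = f0Start * std::pow(f0End / f0Start, u); // slow sweep
        const int partials = static_cast<int>(nyquist / f0);      // stay below Nyquist
        double s = 0.0;
        for (int k = 1; k < partials && k < 512; ++k) {
            phase[k] = std::fmod(phase[k] + twoPi * k * f0 / sampleRate, twoPi);
            s += std::sin(phase[k]) / k; // saw: 1/k amplitude per harmonic
        }
        out[i] = static_cast<float>(s);
    }
    return out;
}
```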

What I mean by varying response is that when you have something like a triangular kernel, when you sample it at the middle you get [..,0,1,0,..] and the response is completely flat at unity. Then when the center falls in the middle of two samples, you get [..,0,0.5,0.5,0,..], which places a zero at Nyquist (ie. the response is a very gradual low-pass that goes all the way to zero). In between these two extremes you get something in between. If you then sample "randomly" at different points, you get noise as the response varies. This noise is at the high frequencies (where the variation happens), so such a triangular filter is only really a good interpolator at low frequencies (for which increasing the resolution or "oversampling" helps).
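
A quick numeric illustration of this (a self-contained sketch, not code from the thread): the two taps of a linear interpolator at fractional offset d are (1-d, d), so its response is H(w) = (1-d) + d*e^(-jw), and the magnitude at Nyquist swings from 1 down to 0 as d goes from 0 to 0.5.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>

int main()
{
    const double pi = 3.141592653589793;
    for (double d = 0.0; d <= 0.5; d += 0.125) {
        // taps (1-d, d), so H(w) = (1-d) + d*exp(-j*w); evaluate at Nyquist (w = pi)
        const std::complex<double> h =
            (1.0 - d) + d * std::exp(std::complex<double>(0.0, -pi));
        std::printf("d = %.3f  |H(Nyquist)| = %.3f\n", d, std::abs(h));
    }
    // Prints 1.0 at d = 0 down to 0.0 at d = 0.5: the response swings between
    // flat and a deep Nyquist notch, which reads as high-frequency noise when
    // the offset changes from sample to sample.
    return 0;
}
```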

Ultimately though, it's impossible to say what your problem might be. You simply need to try varying one parameter at a time, look at an analyzer with a test signal and see what happens. It's worth noting that this type of stuff tends to be really sensitive to details, so it's relatively easy to make small oversights in code that end up nudging some value somewhere slightly, resulting in a seemingly "working" but slightly noisy situation. You could even have some half-sample alignment issue with your windowed-sinc design that could cause sub-optimal performance (been there, done that)... so unfortunately "it's complicated" :)

Post

xoxos wrote:
matt42 wrote:What would I be asking for otherwise?
irony. but i'm glad to hear the part about the person being wrong is still true.
You're saying "interpolation" only refers to fractional interpolation? I guess you think you're right on that. Well, as much as I'd love to go down that path with you (not really - is that irony or sarcasm?), I have a problem I'd like help on, without drunken hecklers yelling in fringe/wrong/irrelevant nonsense. You may as well be the bug induced noise in my interpolator. Please reply with some actual theory, or practice, on designing a clean interpolator.

Post

mystran wrote:What I mean by varying response is that when you have something like a triangular kernel, when you sample it at the middle you get [..,0,1,0,..] and the response is completely flat at unity. Then when the center falls in the middle of two samples, you get [..,0,0.5,0.5,0,..], which places a zero at Nyquist (ie. the response is a very gradual low-pass that goes all the way to zero). In between these two extremes you get something in between. If you then sample "randomly" at different points, you get noise as the response varies. This noise is at the high frequencies (where the variation happens), so such a triangular filter is only really a good interpolator at low frequencies (for which increasing the resolution or "oversampling" helps).
Yes, this is what I'm getting at. I had discarded this initial intuition for some reason, and most of the early replies to this thread weren't helping; they were reinforcing the idea that band-limiting was the only issue. So this points me back in the direction that the only way to arrive at a suitable interpolator will be through testing (with some theory to point in the right direction).
mystran wrote:Ultimately though, it's impossible to say what your problem might be. You simply need to try varying one parameter at a time, look at an analyzer with a test signal and see what happens. It's worth noting that this type of stuff tends to be really sensitive to details, so it's relatively easy to make small oversights in code that end up nudging some value somewhere slightly, resulting in a seemingly "working" but slightly noisy situation. You could even have some half-sample alignment issue with your windowed-sinc design that could cause sub-optimal performance (been there, done that)... so unfortunately "it's complicated" :)
Yep, I don't expect you to debug my code by magic! But tips on possible tests, and confirmation that even a band-limited interpolation kernel can cause noise, are the kind of input I was looking for.

Post

Interpolation causes aliasing, even sinc interpolation. The reason I suggested that maybe your sines weren't bandlimited is because you said that you were using wavetables to create your sines. Wavetable + interpolation (especially linear) = aliasing. That's why I was suggesting that maybe you aren't as bandlimited as you thought you were and that maybe your decimator isn't steep/narrow enough. I suggest this because it's happened to me and it took me a while to figure that out.

I managed to miss the part in the title where you specifically say that you're designing a sinc interpolator. My bad, of course polynomials aren't what you're looking for. I suggest you do what you're probably going to do anyways, use a spectrogram to make sure that you have a long enough kernel that it isn't aliasing.

The suggestion to try a Kaiser window with a high beta is a solid one, from my perspective. I suggest 40+ taps with a beta > 7; that seems to be a reasonable place to start (assuming that you're resampling f0/1.8). Beyond that, Blackman-Harris has low spectral leakage, so that might work for you as well.
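
To make that concrete, here's a hedged sketch of a Kaiser-windowed sinc prototype (my code, not sault's; the helper names and the series approximation of I0 are mine, and the tap count/beta/cutoff in the usage note are just the starting values suggested above). For a polyphase interpolator you'd sample such a prototype at each branch's fractional offset.

```cpp
#include <cmath>
#include <vector>

// Modified Bessel function of the first kind, order 0, via its power
// series: I0(x) = sum_k ((x/2)^k / k!)^2. Done in doubles throughout.
static double besselI0(double x)
{
    double sum = 1.0, term = 1.0;
    for (int k = 1; k < 64; ++k) {
        const double r = x / (2.0 * k);
        term *= r * r;
        sum += term;
        if (term < 1e-16 * sum) break; // series has converged
    }
    return sum;
}

// Kaiser-windowed sinc low-pass. cutoff is normalized so 1.0 = Nyquist;
// for downsampling by a factor R, something like 1/R is the usual choice.
std::vector<double> kaiserSinc(int taps, double beta, double cutoff)
{
    const double pi = 3.141592653589793;
    const double center = 0.5 * (taps - 1);
    const double i0beta = besselI0(beta);
    std::vector<double> h(taps);
    for (int n = 0; n < taps; ++n) {
        const double t = n - center;
        const double x = pi * cutoff * t;
        const double sinc = (t == 0.0) ? 1.0 : std::sin(x) / x; // ideal low-pass
        const double r = (n - center) / center;                 // -1..1 across window
        const double w = besselI0(beta * std::sqrt(1.0 - r * r)) / i0beta;
        h[n] = cutoff * sinc * w; // DC gain ~ 1
    }
    return h;
}

// Example starting point per the post above (illustrative values):
// auto kernel = kaiserSinc(48, 8.0, 1.0 / 1.8);
```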

Post

TLDR:

Here's an excellent PDF containing info/specifications and ready-to-run code for high-quality interpolation algorithms. You can use them for your resampling interpolation:

http://yehar.com/blog/wp-content/upload ... 8/deip.pdf

Just check the end part of the PDF; there's a table with the SNR and CPU cost of the different algorithms.

Also note that those interpolation algorithms assume you already have an oversampled signal, which you do in your case if I understood you correctly (bandlimited to much lower than Nyquist).

Post

matt42 wrote:I have a problem I'd like help on, without drunken hecklers yelling in fringe/wrong/irrelevant nonsense. You may as well be the bug induced noise in my interpolator.
when i think surely someone with experience can assist with a dsp issue, i often think,

i know, let's go to kvr where i can receive some abuse from an authoritarian.


fascinating concepts about the absolute lack of benefit from such activity, glad to see you exploring them, dickhead.

Post

xoxos wrote:
matt42 wrote:I have a problem I'd like help on, without drunken hecklers yelling in fringe/wrong/irrelevant nonsense. You may as well be the bug induced noise in my interpolator.
when i think surely someone with experience can assist with a dsp issue, i often think,

i know, let's go to kvr where i can receive some abuse from an authoritarian.


fascinating concepts about the absolute lack of benefit from such activity, glad to see you exploring them, dickhead.
:lol:

Post

sault wrote:Interpolation causes aliasing, even sinc interpolation. The reason I suggested that maybe your sines weren't bandlimited is because you said that you were using wavetables to create your sines. Wavetable + interpolation (especially linear) = aliasing. That's why I was suggesting that maybe you aren't as bandlimited as you thought you were and that maybe your decimator isn't steep/narrow enough. I suggest this because it's happened to me and it took me a while to figure that out.
I see - I didn't fully explain the method. The sines, of course, have to be below the target Nyquist, and each sine's period also needs to divide the table length evenly (an integer number of cycles per table), as in the sketch below.
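
A tiny illustration of that constraint (a hypothetical helper, not matt42's actual code): with a whole number of cycles per table, the tabled sine wraps seamlessly and the only frequency present is cycles/tableLen of the table rate.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Fill a wavetable with an exact integer number of sine cycles, so the
// table is exactly periodic when read at any integer step.
std::vector<float> makeSineTable(std::size_t tableLen, int cycles)
{
    const double twoPi = 6.283185307179586;
    std::vector<float> table(tableLen);
    for (std::size_t i = 0; i < tableLen; ++i)
        table[i] = static_cast<float>(
            std::sin(twoPi * cycles * double(i) / double(tableLen)));
    return table;
}
```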
sault wrote:I managed to miss the part in the title where you specifically say that you're designing a sinc interpolator. My bad, of course polynomials aren't what you're looking for. I suggest you do what you're probably going to do anyways, use a spectrogram to make sure that you have a long enough kernel that it isn't aliasing.
Well, I did ask about approximating the sinc with piecewise polynomials, so that might have put you off. And, yes, I guess that's how I'll have to approach it - a bit of guided trial and error with a spectrogram.
sault wrote:The suggestion to try a Kaiser window with a high beta is a solid one, from my perspective. I suggest 40+ taps with a beta > 7, that seems to be a reasonable place to start (assuming that you're resampling f0/1.8 ). Beyond that, Blackman-Harris has low spectral leakage, that might work for you as well.
Yeah, I hadn't thought outside the box to design a Kaiser with arbitrary M and beta. I'll give that a shot. I'm using Blackman-Nuttall, and IIRC the results are close to Blackman-Harris, at least from when I was testing FIR filters.

Thanks, Kraku. That's a great paper - it actually came up earlier in the thread.

Post

matt42 wrote:Yeah, I hadn't thought outside the box to design a Kaiser with arbitrary m and Beta. I'll give that a shot. I'm using Blackman-Nuttall and IIRC the results are close to Blackman-Harris, at least when I was testing FIR filters.
Blackman-Nuttall should actually get you to somewhere around 100dB attenuation, so if you see significantly less, the problem is probably elsewhere. That said, the Kaiser window is particularly nice (especially for testing) specifically because you can simply keep bumping the quality higher to stress-test the other aspects of your implementation (up to a point, anyway; you'll run into numerical limits with the modified Bessel function I0 if you push it too high, and for this and similar reasons I suggest double-precision arithmetic for any and all kernel calculations, even if your final processing is done in singles).

Post

Thanks, so the main problem with a Kaiser at high beta is numerical issues when deriving the coefficients. I guess I could use something with crazy precision if required (the lol::real type comes to mind). Thanks again for all the helpful insights on this.

Post

matt42 wrote:Thanks, so the main problem, regarding kaiser with high beta, is numerical issues deriving the coefficients. I guess I could use something with crazy precision if required (the lol::real type comes to mind). Thanks again for all the helpful insights on this.
Oh, I'd like to clarify that it isn't really a practical concern for practically interesting designs (at least not when done with doubles); you'll hit the noise floor trying to measure the response with an FFT first (with said FFT in double precision as well). It's just that if you THEN throw another factor of 2 on it (or something along those lines), it might blow up. So basically I just wanted to warn you that you don't need to panic if this happens. :)
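
To make the measurement step concrete, here's a small sketch (my construction, with a naive DFT standing in for a real FFT on the zero-padded kernel) that evaluates a kernel's frequency response on a grid in double precision and reports the worst stopband level relative to DC:

```cpp
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

// Worst stopband magnitude of kernel h, in dB relative to its DC gain.
// stopEdge is normalized (1.0 = Nyquist); frequencies below it are skipped.
double stopbandPeakDb(const std::vector<double>& h, double stopEdge)
{
    const double pi = 3.141592653589793;
    double dc = 0.0;
    for (double c : h) dc += c; // response at w = 0

    double worst = 0.0;
    const int grid = 4096;
    for (int bin = 0; bin <= grid; ++bin) {
        const double w = pi * bin / grid; // 0..pi
        if (w < pi * stopEdge) continue;  // only measure the stopband
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t n = 0; n < h.size(); ++n)
            acc += h[n] * std::exp(std::complex<double>(0.0, -w * double(n)));
        worst = std::max(worst, std::abs(acc));
    }
    return 20.0 * std::log10(worst / dc);
}
```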

Post

Thanks, I've got something that'll work for now. The next step is to build the resampler, and then later optimise the interpolation algorithm and play around to try to reduce the taps.

Post

Re the noise issue: I'm woefully ignorant of the topic, as usual. I was wondering earlier what practical effect would result from "slight math errors" in each interpolated sample, especially with smaller kernels. Would it result in noise? Or aliasing? Some other effect?

For instance, cheap polyphase 4X upsampling: maybe three sets of 8-point FIR coefficients, with one set to calculate each 25% interpolated sample, a second set to calculate each 50% interpolated sample, and a third set to calculate each 75% interpolated sample.

Considering only 8 samples on each side of the target interpolation point, even with double math it seems that each interpolated point wouldn't likely be the EXACT value of a perfect band-limited interpolation. For instance, using 16 or 32 samples on each side of the target interpolated point ought to yield more-exact values for each interpolated sample, and of course using fewer than 8 samples on each side should yield less-exact values.

So especially with small kernels, wouldn't there be slight "random-spread" overshoots and undershoots for each interpolated sample? Maybe the error distribution would not be "truly random", but there ought to be some kind of slight error distribution on the interpolated samples?

Would that act like adding noise to the signal?
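
For concreteness, here is a minimal sketch of the polyphase structure described in this post (my code, not jcjr's; the branch tables are placeholders that would come from sampling a windowed-sinc prototype at the 25/50/75% offsets, and the phase-0 alignment is glossed over, which is exactly the half-sample detail mentioned earlier in the thread):

```cpp
#include <array>
#include <cstddef>
#include <vector>

constexpr int kTaps = 8;
using Branch = std::array<double, kTaps>;

// 4x polyphase upsampler: for each input sample, emit the input itself
// plus three FIR outputs, one per fractional phase (25%, 50%, 75%).
std::vector<double> upsample4x(const std::vector<double>& in,
                               const std::array<Branch, 3>& branches)
{
    std::vector<double> out;
    out.reserve(in.size() * 4);
    for (std::size_t i = kTaps; i < in.size(); ++i) {
        // Phase 0: the original sample, delayed to (roughly) line up with
        // the FIR branches' group delay. Getting this alignment exactly
        // right is left as a detail here.
        out.push_back(in[i - kTaps / 2]);
        for (const Branch& b : branches) { // phases 1..3
            double acc = 0.0;
            for (int t = 0; t < kTaps; ++t)
                acc += b[t] * in[i - t];   // 8-tap FIR per branch
            out.push_back(acc);
        }
    }
    return out;
}
```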

Post

When my uneducated eyes gaze over deip.pdf, I see polynomials, and these do not seem to be motivated by the sampling theorem, so whether it's noise or aliasing or both doesn't seem to matter; these formulas are just approximations and can do both. Just a thought.

Post

stratum wrote:When my uneducated eyes gaze over deip.pdf, I see polynomials, and these do not seem to be motivated by the sampling theorem, so whether it's noise or aliasing or both doesn't seem to matter; these formulas are just approximations and can do both. Just a thought.
I think they are supposed to approximate a windowed sinc function. A quick look at the impulse response diagrams shows they mostly look like windowed sinc functions (at least superficially), although they only seem to cover the main central peak and maybe a couple of zero crossings. (Edit: though less so in the optimal designs.)

Edit:
One thing that's confusing is this passage:
The effects of the oversampling, pre-emphasis and pinking can be combined. We shall call the frequency responses obtained this way modified frequency responses.
Then the modified SNR is derived from the peak value of the "first six stop bands" (side lobes?); this was used in a differential evolution algorithm to determine the coefficients of the optimal designs. So the quoted SNR of the interpolators is not a real-world value, as the interpolation noise itself isn't taken into account. I guess for quality results these interpolators would need testing for both passband response and noise levels before use.
