Why does linear interpolation produce aliasing?

DSP, Plugin and Host development discussion.

Post

Hi davidspry,

In my experience, linear interpolation works well for modulated delay effects such as flanger, vibrato, etc. as long as the modulation frequency is no more than a few Hz. Based on this and your posts here, I would suspect that the problem is that you have some high-frequency components in your modulation signal and that what you are hearing is distortion. The most likely suspects are arithmetic errors or jumps in your index arithmetic, some kind of wraparound problem, and most suspicious of all - to me anyway - your calculation of the "target" variable. Have you examined all of these? You may find it useful to plot or print out your read position and maybe even look at its spectrum for different settings. Because you found that the problem exists even at very high sample rates, I think you probably have a problem other than the interpolation scheme.

Regards,
Dave Clark
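
For illustration, here is a minimal C++ sketch (not from the thread; names are placeholders) of the kind of per-sample dump suggested above: collect the fractional read position for each sample of a block, then write it to a CSV so it can be plotted and inspected for jumps or wraparound glitches.

```cpp
#include <cstdio>

// Debugging sketch: dump the delay line's read position, one value per sample,
// to a CSV file so it can be plotted and checked for discontinuities.
// readPositions is assumed to have been filled inside the audio callback.
void dumpReadPositions(const float* readPositions, int numSamples, const char* path)
{
    std::FILE* f = std::fopen(path, "w");
    if (f == nullptr)
        return;

    std::fprintf(f, "sample,readPosition\n");
    for (int n = 0; n < numSamples; ++n)
        std::fprintf(f, "%d,%.8f\n", n, readPositions[n]);

    std::fclose(f);
}
```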

Post

mda wrote: Fri Jul 03, 2020 6:44 am Adding to earlevel's great explanation, when your delay time is not changing or only changing slowly, the interpolation becomes a static FIR filter, so there's no aliasing, just a high-frequency roll-off depending on the fractional position. It would be possible to compensate for the roll-off, but a better interpolator (and/or oversampling) will also result in a flatter frequency response, so that's the way to go.
To add some intuitive explanation to this:

If you imagine a sampled sine wave, then linearly interpolating at the k=50% point midway between samples will always give a result which is closer to zero than the true value of the waveform at that point. However, a linear interpolation nearer to the k=0% start or k=100% end of the gap between samples will give a result closer to the original signal, and so with less attenuation. This attenuation becomes more marked as the frequency of the signal increases, so in effect you have a high-frequency roll-off which depends on the interpolation position k. The closer k is to 50%, the more the HF will be attenuated.

If you keep the k value constant then you will have a constant attenuation, which will not sound noticeably unpleasant (although it will have a high-frequency roll-off depending on k). However, when you sweep your delay time you will potentially have a different k value for each sample, which will result in a modulation effect on the HF content of the interpolated waveform: for those samples where k is closer to 50% the HF components of the waveform will be badly attenuated, whereas if k is nearer to 0% or 100% then the waveform will be closer to the original.

This will most likely sound as if the HF content of the waveform is being modulated by a noise signal (or, at best, by a kind of sawtooth if the changes in delay time are small enough), which I think is what you are describing as aliasing.

You could verify this by testing on waveforms which contain more or less HF component.
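
To put some numbers on that roll-off: linear interpolation at fraction k is just a 2-tap FIR with coefficients (1 - k) and k, so its gain at normalized angular frequency w is |(1 - k) + k * e^(-jw)|. Here is a small illustrative sketch (not from the thread) that prints this gain at a quarter of the sample rate for a few fractions; the attenuation is worst at k = 0.5, as described above.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>

// Gain of linear interpolation at fraction k: a 2-tap FIR with taps (1 - k, k),
// evaluated at normalized angular frequency w (radians per sample).
double linearInterpGain(double k, double w)
{
    const std::complex<double> h = (1.0 - k) + k * std::exp(std::complex<double>(0.0, -w));
    return std::abs(h);
}

int main()
{
    const double pi = 3.14159265358979323846;
    const double w  = 0.5 * pi; // a quarter of the sample rate, e.g. ~11 kHz at 44.1 kHz

    for (int i = 0; i <= 4; ++i)
    {
        const double k = 0.25 * i;
        std::printf("k = %.2f  gain = %.4f\n", k, linearInterpGain(k, w));
    }
    return 0;
}
```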

Post

I'm a little late to the party, but I have implemented cubic interpolation from Paul Bourke's page (the Paul Breeuwsma "Catmull-Rom splines" version towards the middle of the page) for sample-rate conversion, vibrato, etc. in all of my plugins and it sounds great to me!

http://paulbourke.net/miscellaneous/interpolation/


This is probably what you want ;) Don't bother with Cosine - it sounds better than Linear but not by much.
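
For reference, the Catmull-Rom cubic on that page translates to roughly the following C++ (a sketch, not adammonroe's actual plugin code): y0..y3 are four consecutive input samples and mu is the fraction between y1 and y2.

```cpp
// Catmull-Rom cubic interpolation, following the Breeuwsma variant on
// Paul Bourke's interpolation page. y0..y3 are four consecutive samples;
// mu in [0, 1] is the fractional position between y1 and y2.
float catmullRom(float y0, float y1, float y2, float y3, float mu)
{
    const float mu2 = mu * mu;
    const float a0 = -0.5f * y0 + 1.5f * y1 - 1.5f * y2 + 0.5f * y3;
    const float a1 =         y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    const float a2 = -0.5f * y0              + 0.5f * y2;
    const float a3 =                    y1;
    return a0 * mu * mu2 + a1 * mu2 + a2 * mu + a3;
}
```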

Post

@adammonroe:

Apologies for bumping this topic once more, but what's a little unclear to me is that Catmull-Rom requires 4 points, whereas all I'm ultimately looking for is interpolation between 2 points. So my question is: what would be a sensible way to calculate the other 2? At equidistant spaces *between* the 'from' and 'to', for example?

Thank you,
- N

Post

superplek wrote: Thu Aug 06, 2020 11:45 am @adammonroe:

Apologies for bumping this topic once more, but what's a little unclear to me is that Catmull-Rom requires 4 points, whereas all I'm ultimately looking for is interpolation between 2 points. So my question is: what would be a sensible way to calculate the other 2? At equidistant spaces *between* the 'from' and 'to', for example?

Thank you,
- N
Not sure I follow you correctly. You want to interpolate 2 extra points so you can use a 4 point interpolator?

Post

Maybe I'm jumping in ahead of the topic. But if you look at it like a filter in the frequency domain, you want a filter that smooths out the stepped signal. This filter would be a linear one, right? So it would be like comparing a sine with no extra spectral content to a triangle with its odd harmonics. I personally like a filter tuned at 2*pi*F (and, to get it to 0 dB, the square root of 1.5), so F * 2*pi * sqrt(1.5), but that is because I can (and maybe will) catch the frequency the signal is happening at, and thus can gauge the frequency at which there is a need for interpolation.

"Edit" It doesnt really produce aliassing, as all digital does. But "odd" harmonic distortion.
And it doesnt sound that bad, it rather sounds kinda good.. With a triangle the 3rd harmonic is kind of there but not in your face.

Post

superplek wrote: Thu Aug 06, 2020 11:45 am what's a little unclear to me is that Catmull-Rom requires 4 points, whereas all I'm ultimately looking for is interpolation between 2 points.
The 4 points are four consecutive "original uninterpolated" samples from your source signal. Two samples before the fractional interpolation position and two samples after the fractional interpolation position.

Sometimes the four samples are named something like s[-1], s[0], s[1] and s[2], and the interpolation fraction is somewhere between s[0] and s[1]. For example, if the fraction = 0.8 then it would interpolate 4/5 of the way between s[0] and s[1]. But other naming schemes may be encountered.

Linear interp only uses two samples, one before and one after the fractional interpolation position.

The four-point interpolator considers an extra sample on both sides of the fractional interp position. So in practice your code needs to remember the previous two source samples and also needs to know two future source samples, causing the output to have a tiny delay: it has to wait on future input samples before it can generate the current output sample, "a tiny bit late".

Those kinds of interpolators act like spline curves in drawing programs. They draw a squiggly curve that passes through all 4 points. The two samples on the outside act like a drawing program's spline control handles: the shape of the curved line segment between the two inner samples depends on the values of the two outer "control handle" samples.

For example, if you decided to interpolate 100 fractional values between the two inner samples and plot them in a wave display (plotting fractions s[0.0], s[0.01], s[0.02], ... up to fractional location s[0.99]), it would draw a curve whose shape depends on the values of the two outside samples s[-1] and s[2].

By comparison, of course, a 2-point linear interp would always draw the same straight line regardless of the values of the farther-away surrounding samples.
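
To put that indexing into code, here is a rough sketch (not JCJR's code) of reading a fractional delay from a circular buffer with a 4-point interpolator. It assumes a power-of-two buffer size so wrapping can use a bit mask, that the delay stays well inside the buffer length, and that a catmullRom() function like the one sketched earlier in the thread is available.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Defined as in the earlier sketch: 4-point Catmull-Rom between y1 and y2.
float catmullRom(float y0, float y1, float y2, float y3, float mu);

// Sketch: read a fractional delay (in samples) from a circular buffer using the
// four surrounding samples s[-1], s[0], s[1], s[2].
float readFractionalDelay(const std::vector<float>& buffer, int writeIndex, float delaySamples)
{
    const int   mask    = static_cast<int>(buffer.size()) - 1;    // buffer size is a power of two
    const float readPos = static_cast<float>(writeIndex) - delaySamples;
    const int   i0      = static_cast<int>(std::floor(readPos)); // index of s[0]
    const float mu      = readPos - static_cast<float>(i0);      // fraction between s[0] and s[1]

    const float sm1 = buffer[static_cast<std::size_t>((i0 - 1) & mask)]; // s[-1]
    const float s0  = buffer[static_cast<std::size_t>( i0      & mask)]; // s[0]
    const float s1  = buffer[static_cast<std::size_t>((i0 + 1) & mask)]; // s[1]
    const float s2  = buffer[static_cast<std::size_t>((i0 + 2) & mask)]; // s[2]

    return catmullRom(sm1, s0, s1, s2, mu);
}
```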

Post

JCJR wrote: Thu Aug 06, 2020 9:36 pm
superplek wrote: Thu Aug 06, 2020 11:45 am what's a little unclear to me is that Catmull-Rom requires 4 points, whereas all I'm ultimately looking for is interpolation between 2 points.
The 4 points are four consecutive "original uninterpolated" samples from your source signal. Two samples before the fractional interpolation position and two samples after the fractional interpolation position.

Thank you, this has been very useful!

Post

So, if I have a plugin that, for specific reasons, I want to code internally at 44.1k, but I want to resample its output to whatever the DAW is requesting... what are some reasonable techniques to handle this? Most of the resampling code that I've seen deals with a stream, while plugins deal with sometimes oddly sized buffers, so I can't really know what the next few samples of the NEXT buffer will be.

Post

joshb wrote: Fri Aug 07, 2020 10:59 pm So, if I have a plugin that, for specific reasons, I want to code internally at 44.1k, but I want to resample its output to whatever the DAW is requesting... what are some reasonable techniques to handle this? Most of the resampling code that I've seen deals with a stream, while plugins deal with sometimes oddly sized buffers, so I can't really know what the next few samples of the NEXT buffer will be.
tl;dr: each process has its own buffer

Think about it this way. If you were doing a simple process, like gain, you'd just have a big FOR loop that pulls an indexed sample out of the input buffer, processes it, places it indexed into the output buffer.

For a gain plugin, "processes it" is simple. For processes that require more history (more input samples to produce one output sample), those processes manage their own history requirements. In other words, they have their own buffers as needed. Let's say the first stage is an 8x upsampler (followed by, say, a clipper and an 8x downsampler, but we can ignore those).

Let's say we have low standards :lol:, and our upsampler is a one-stage design that needs a 39-sample input buffer to run a windowed-sinc FIR of that length (we initialize the buffer with zeros). The process takes in one sample (via the FOR loop), adds it to its input buffer, does its thing; the FOR loop steps through all the samples from the host buffer in this way, till done (at the other end of the FOR loop, the downsampler process supplies one sample to the host output buffer).

Note that it does not matter at all in this explanation whether the host buffer is 32 samples (smaller than the 39-sample buffer owned and managed by the upsampler), or 64 samples (bigger).
My audio DSP blog: earlevel.com
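
As a rough sketch of the "each process has its own buffer" idea (not earlevel's actual code, and a plain FIR rather than the 8x upsampler, just to keep it short): the stage keeps its own 39-sample history, so it can be fed one sample at a time no matter what block size the host uses. The 39-tap length follows the windowed-sinc example above; the coefficient values are left as placeholders.

```cpp
#include <array>
#include <cstddef>

// Sketch of a processing stage that owns its own history buffer, so it can be
// fed one sample at a time regardless of the host block size.
class FirStage
{
public:
    float processSample(float in)
    {
        // Shift the history and append the new input (simple, not optimized).
        for (std::size_t i = kTaps - 1; i > 0; --i)
            history[i] = history[i - 1];
        history[0] = in;

        float out = 0.0f;
        for (std::size_t i = 0; i < kTaps; ++i)
            out += coeffs[i] * history[i];
        return out;
    }

private:
    static constexpr std::size_t kTaps = 39;
    std::array<float, kTaps> history {}; // zero-initialized, as described above
    std::array<float, kTaps> coeffs {};  // fill with your windowed-sinc coefficients
};
```

The host-facing FOR loop then just calls processSample() once per sample of whatever block it was handed, whether that block is 32 samples or 512.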

Post

Thanks, @earlevel, but either you didn't understand my use case, or (more likely) I didn't understand your reply.

I'm talking about an instrument plugin that generates audio internally at 44.1k. But since it's a plugin, it will be receiving arbitrarily sized buffers to fill at an arbitrary output sample rate. If it were a single stream and not in blocks, I could code a Catmull-Rom function to resample it all, no problem. But since it's in blocks, I don't know what to do with the edges of the blocks. And it could also be upsampling to arbitrary rates (48k, 88.2k, 96k, etc.), or (less likely) downsampling to whatever.

Post

joshb wrote: Sat Aug 08, 2020 6:00 pm Thanks, @earlevel, but either you didn't understand my use case, or (more likely) I didn't understand your reply.

I'm talking about an instrument plugin that generates audio internally at 44.1k. But since it's a plugin, it will be receiving arbitrarily sized buffers to fill at an arbitrary output sample rate. If it were a single stream and not in blocks, I could code a Catmull-Rom function to resample it all, no problem. But since it's in blocks, I don't know what to do with the edges of the blocks. And it could also be upsampling to arbitrary rates (48k, 88.2k, 96k, etc.), or (less likely) downsampling to whatever.
OK, I think you didn't understand my reply. The edges of the blocks don't matter, because your process buffers what it needs. That adds latency, of course. Fundamentally, this is the same way you'd code a delay line. Say you want a delay that's 100 ms at 48k. That's 4800 samples. You can't just reach back into the incoming host buffers: you have edge issues, plus the problem that the buffers are probably not anywhere near that many samples. No problem, your delay process has its own circular buffer to hold at least 4800 samples. It doesn't matter whether the host is giving you one sample at a time or 512 from which you pluck one sample at a time.
My audio DSP blog: earlevel.com
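
A minimal sketch of that circular-buffer delay (illustrative only, not earlevel's code), sized for 100 ms at 48 kHz as in the example:

```cpp
#include <cstddef>
#include <vector>

// Sketch: a delay line that owns a circular buffer big enough for the longest
// delay it needs (here 4800 samples, i.e. 100 ms at 48 kHz), independent of
// whatever block size the host delivers.
class DelayLine
{
public:
    explicit DelayLine(int maxDelaySamples = 4800)
        : buffer(static_cast<std::size_t>(maxDelaySamples), 0.0f) {}

    // Write one input sample, then read the sample from delaySamples ago.
    // delaySamples must be in [0, maxDelaySamples - 1].
    float processSample(float in, int delaySamples)
    {
        buffer[static_cast<std::size_t>(writeIndex)] = in;

        int readIndex = writeIndex - delaySamples;
        if (readIndex < 0)
            readIndex += static_cast<int>(buffer.size());
        const float out = buffer[static_cast<std::size_t>(readIndex)];

        writeIndex = (writeIndex + 1) % static_cast<int>(buffer.size());
        return out;
    }

private:
    std::vector<float> buffer;
    int writeIndex = 0;
};
```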
