Polynomial | Spline Interpolation for smoothing out sliders in JUCE++?

DSP, Plugin and Host development discussion.

Post

Hey everyone! I’m currently enrolled in a numerical analysis class, and my peers and I got plenty of freedom to do a final assignment involving one of the numerical methods taught in the lectures.

I started studying JUCE’s tutorials and figuring out the basics of DSP, and I eventually realized I could maybe showcase a use of interpolation: removing the artifacting you get when moving sliders in discrete intervals. As it stands, I spent the better part of this week desperately attempting to code my own linear interpolation function for a simple sine generator - and failed miserably. The results are always something along the lines of incredibly garbled audio, infinitely growing frequency values and whatnot.

I have a sine wave generator, where the user controls the frequency of each wave and the volume in dB, cobbled together from a tutorial that didn’t illustrate how to smoothly interpolate values between slider changes. I added an interpolate button that toggles my attempts at smoothing out the value transitions. As this is an experiment on numerical methods, I have to showcase something from my course material. I can use spline interpolation, polynomial interpolation, finite differences, and linear interpolation (not sure if I can do linear though, kinda risky).

It’s from the basic Audio Application template, standard MainComponent.h and MainComponent.cpp files.

I think my main problem here is most definitely what kind of points I am supposed to interpolate. I put my main doubts as comments in the interpolate() function.

I have four possible options here:
(1) Fix my crappy interpolation attempt - I can’t for the life of me figure out what exactly to use as input for the function.
(2) Understand the SmoothedValue or the LinearSmoothedValue JUCE class - I lifted the former from the kvraudio forums (viewtopic.php?p=6548432) in hopes of finding something that worked so that I could base my own solution on it, but I couldn’t get it to work. Same goes for LinearSmoothedValue.
(3) Attempt spline or polynomial interpolation, but I don't know what points (x0,y0), (x1,y1) to use as input for the function.
(4) Give up and cry really hard in a corner somewhere.

Ideally, a function or class that could be called in getNextAudioBlock() for every sample would work best. But I'm not really sure if that would be the best spot to call such a function.

I added plenty of commentary in my code to hopefully guide you to the root of my problems. Right now I'm very desperate and have pretty much given up hope of ever removing the artifacting, so I would greatly appreciate any help. (The main issues are around line 156.)

Codeshare:
https://codeshare.io/aVjRRb5

Post

You could use a LP (low-pass) filter? Maybe that doesn't fit the paper's requirements though.

here is a smooth I wrote at some point->

Post

The problem with interpolating is that interpolation is defined as a function blending between two points with various arguments (control points, etc.)

So ultimately no matter what sort of interpolation you're wanting to use, it reduces eventually to linear interpolation ("lerp") and then to nearest-neighbor interpolation (what to call that? "norp" ? :hihi:)

Code: Select all

lerp(x, y, z) = x * (1 - z) + y * z
norp(x, y, z) = x // eff off y and z arguments! YOU'RE USELESS!
So the key here is that x and y are arbitrary values between which you are interpolating, while z represents the fractional point (0 to 1) between those two values.

It should be obvious that lerp(x, y, z) is a straight line while other polynomial interpolators are various Nth order curves.

So you should attempt to get things working right using norp() first (do we need to interpolate? norp! :smug:)

Then you need to think about which two points you are interpolating between.

A good example is that you probably want to take the previous input value as "x", with the new input value as "y" and you want to interpolate from "x" now (z = 0) to "y" at some point in the future (z = 1).

Therefore you need to set "some point in the future" to a specific value such as 100 ms, then compute z from the time elapsed since the input event and clamp it ("dim()" / "limit(x, min, max)") to the inclusive range 0 to 1. The argument "z" is therefore a fraction representing the position between the two values.

For example:

Code: Select all

interpolation_length = sample_rate * 0.100;
fraction = limit((current_sample - last_input) / interpolation_length, 0, 1);
actual_value = lerp(last_value, new_value, fraction);
You can then utilize control points for higher order polynomial interpolators as desired, generally in the form of spline(w, x, y, z, fraction) if you can come up with a rational source for those control points.
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

aciddose wrote: So ultimately no matter what sort of interpolation you're wanting to use, it reduces eventually to linear interpolation
There's usually a variable in there somewhere that requires a linear input? :roll:

Post

No, what I mean is you are always interpolating between two points. That's what linear interpolation is, a straight line drawn between two points.

The problem is that I suspect the OP is interpolating between one point and who knows what.

Post

aciddose wrote:No, what I mean is you are always interpolating between two points. That's what linear interpolation is, a straight line drawn between two points.

The problem is that I suspect the OP is interpolating between one point and who knows what.
Well, there are plenty of interpolation schemes not guaranteed to intersect exactly with two points specified in time, or even ever. Even if they all did your logic wouldn't be correct (they aren't all straight lines).

But you may well be correct that the problem is incorrectly specified points, or just a faulty implementation.

Post

"Inter" = between.

Can you be between one thing? Or nothing?

If you break it down, the lowest level of "between" is two points.

I don't get what there is to argue here because if this doesn't come intuitively then I don't know how you manage to succeed at anything.

Post

Yes, I'm sure we all agree what interpolation means and I'm not really interested in arguing about it.

Your argument seems to boil down to: all interpolators interpolate. And I agree. I also agree that the most likely error is an implementation error. Another possibility is that some algorithms just don't interpolate nicely.

I'm on the phone right now, so I haven't looked at the actual source code yet.

Post

nix808 wrote: You could use a LP (low-pass) filter? Maybe that doesn't fit the paper's requirements though.

here is a smooth I wrote at some point->

Thanks! But yeah, it unfortunately doesn't fit in the requirements. I will keep this saved for future personal projects though :)

Post

Oh, I somehow triggered the spam filter. Can't get my answer to post, jeez. I guess I need something like 5 posts to not fall in it.

Post

aciddose wrote:
Then you need to think about which two points you are interpolating between.

A good example is that you probably want to take the previous input value as "x", with the new input value as "y" and you want to interpolate from "x" now (z = 0) to "y" at some point in the future (z = 1).

Therefore you need to set "some point in the future" to a specific value such as 100 ms, then compute z from the time elapsed since the input event and clamp it ("dim()" / "limit(x, min, max)") to the inclusive range 0 to 1. The argument "z" is therefore a fraction representing the position between the two values.

For example:

Code: Select all

interpolation_length = sample_rate * 0.100;
fraction = limit((current_sample - last_input) / interpolation_length, 0, 1);
actual_value = lerp(last_value, new_value, fraction);
Hey, thanks for the swift, excellent answer! I think this is exactly the kind of thinking I was looking for. I guess I'm still very much a newcomer on the subject, as I can't quite grasp a solution yet. But considering you haven't looked at the code yet, here's what I have so far:
  • I have a time variable that is incremented by a deltaTime of 1/sampleRate for every sample processed
  • In the prepareToPlay() function, the slider values are initialised to the default values.
  • When the user drags the slider in the sliderValueChanged() function, it sets the dragged value to targetValue.
  • And then for every sample processed, if the targetValue differs from the current value, I call interpolate (value, targetValue, numSamples, sample).
I'm only testing the frequency slider so you can find frequency and targetFrequency values using this approach. But now it comes down to what you've said - to think about points that I am interpolating between. I've always had a very frustrating time with units, keeping time measurement and whatnot in algorithm design and it's no exception here, it's kicking my ass :(

When I do 1/sampleRate, I get the time between each sample, right? Should I use the time variable in the interpolate() function? Is that what you mean by the z parameter in your example?

Also, is the code in the interpolate() function remotely salvageable? (Just how bad is this snippet? I figure knowing how massively I misunderstood the problem before coming here might help clarify a better path to your example.)

Code: Select all

float interpolate (float sample, float numSamples, float curVal, float targetVal, float deltaTime)
{
    float x0 = deltaTime * sample;
    float x1 = deltaTime * (sample + 1);
    float y0 = sample * curVal;      // I would pass samples here as bufferToFill[sample]
    float y1 = sample * targetVal;   // Target values and current values are
    float y;                         // frequency values for each sample
    float avg = (x0 + x1) / 2;       // A value between x0 and x1, I guess this is really faulty
    if (y0 == y1) { return y0; }     // In case the frequency is equal to targetFrequency
    else
    {
        y = y0 + (avg - x0) * ((y1 - y0) / (x1 - x0));
        // ideally, y here would be a "smoothed" gradual increase in the frequency.
        return y;
    }
}
aciddose wrote: You can then utilize control points for higher order polynomial interpolators as desired, generally in the form of spline(w, x, y, z, fraction) if you can come up with a rational source for those control points.
Whenever I see spline interpolation I think of huge linear systems to solve - is this even doable in a function that runs hundreds of times a second, such as getNextAudioBlock()?

(Thanks again for answering everyone - really really helpful for thinking even though I may be coming off as a complete dumbo)

Post

Yes the cubic spline is trivial and is the most common interpolation for sample playback. (In fact splines are trivial and easy to compute, whereas complex polynomials are often impossible: this is because the polynomial must be fit to the data by computing coefficients rather than naturally "splining"/"bending around" the data points. A common joke is "polynomial compression"; you can fit a polynomial to any data points but the number of coefficients will be larger than the original data :))

I suggest you avoid premature optimization.

That is what computing a per-sample delta increment is: just use a normal division as I gave in my example code.

lerp(x, y, z) = x * (1.0 - z) + y * z;

In your set_parameter(x):
input_value = x;
input_sample_index = now; // where does it come from? This would usually be 0 in VST2 while VST3 can use deltas, as far as JUCE though? :shrug:

interpolation_length = time_in_seconds * sample_rate;
fraction = limit((sample_index - input_sample_index) / interpolation_length, 0, 1);
value = lerp(last_value, input_value, fraction);

... and then the problem with attempting to use such interpolation is: how do you handle when an input event happens during the interpolation of a previous event?

That would normally require:
last_value = lerp(last_value, input_value, fraction);

Such that a new input begins, replacing the old one.
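That restart step can be sketched as a tiny smoother class (illustrative names; not JUCE's SmoothedValue, though it serves the same purpose):

```cpp
#include <algorithm>

// Minimal parameter-smoother sketch. When a new target arrives mid-glide,
// the ramp restarts from the value the output had at that instant -- the
// "last_value = lerp(last_value, input_value, fraction)" step above.
class ParamSmoother
{
public:
    void prepare(float sampleRate, float glideSeconds)
    {
        length = std::max(1.0f, sampleRate * glideSeconds); // glide, in samples
    }

    void setTarget(float newTarget)
    {
        start  = current();  // freeze the in-flight value...
        target = newTarget;  // ...and glide from it toward the new target
        elapsed = 0.0f;
    }

    float getNext()          // call once per sample
    {
        float v = current();
        elapsed += 1.0f;
        return v;
    }

private:
    float current() const
    {
        float z = std::min(std::max(elapsed / length, 0.0f), 1.0f);
        return start * (1.0f - z) + target * z;  // lerp(start, target, z)
    }

    float start = 0.0f, target = 0.0f, elapsed = 0.0f, length = 1.0f;
};
```

Usage would be setTarget() from the slider callback and getNext() once per sample in the audio loop.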

My suggestion is that using interpolation to smooth parameter inputs is the worst possible way to do things. You might want to investigate another option such as demonstrating polynomial interpolators applied to re-sampling sampled signals or similar.

The reason interpolating doesn't make sense for unpredictable inputs is exactly that: they're unpredictable. Interpolation is all about blending between two or more known points. That means you need to know everything about at minimum two points.

With an input value you only know about one point: the current value as it was last input. You can artificially create a point to interpolate toward (such as +100 ms, this is called slew limiting) but you can't predict the future inputs.

Post

aciddose wrote: In your set_parameter(x):
input_value = x;
input_sample_index = now; // where does it come from? This would usually be 0 in VST2 while VST3 can use deltas, as far as JUCE though? :shrug:
I believe it's just given as the sample variable in JUCE. Is the sample index the current sample being processed in, say, a buffer of 512 samples? (if the buffer size were 512, and I was iterating buffer[47] or something, would 47 be the sample index?) If so, then the index is here in the getNextAudioBlock() function:

Code: Select all

for (int sample = 0; sample < bufferToFill.numSamples; ++sample)
{
    float value = targetAmplitude * sin(2 * double_Pi * targetFrequency * time);
    monoBuffer[sample] = value;
    time += deltaTime;
}
aciddose wrote: I suggest you avoid pre-mature optimization.
Sorry, I don't think I get what you mean by that exactly :(
aciddose wrote:

My suggestion is that using interpolation to smooth parameter inputs is the worst possible way to do things. You might want to investigate another option such as demonstrating polynomial interpolators applied to re-sampling sampled signals or similar.

The reason interpolating doesn't make sense for unpredictable inputs is exactly that: they're unpredictable. Interpolation is all about blending between two or more known points. That means you need to know everything about at minimum two points.

With an input value you only know about one point: the current value as it was last input. You can artificially create a point to interpolate toward (such as +100 ms, this is called slew limiting) but you can't predict the future inputs.
I think I see what you mean. I'll try your tips next Thursday - I've got two harsh exams over the next two days, and I'll be able to focus on this better as soon as they're done. In the dreadful but extremely possible eventuality that I can't get the artifacting removed - how could I start salvaging my current sine synth to showcase the resampling you mentioned?

Thanks once again for taking the time to share your expertise! It gave me some hope of actually finishing this and not flunking the term. :party:

Post

I'd generate a table to use for the sine function rather than sin(). You could then interpolate to produce different approximations and compute fourier transforms to show the magnitude of harmonics for various interpolators. This is the ideal case for the fourier transform as it only operates on continuous signals while a continuous interpolator is inherently continuous, so you can plot harmonics exactly without worrying about windows or other approximations.

For example the lowest level table is two points: -1 and +1. The most basic interpolator is dirac deltas which isn't practical, so the next level up is "zero order hold", AKA "nearest neighbor interpolation."

That would produce an integrated version of the dirac deltas ("-1th order") which is a "square wave", so rather than amplitude = 1, you'd have a = 1/N^1 "0th order".

Linear interpolation would be the next step which would be the integral of the square which is a "triangle" with 1/N^2. The next step is the basic cubic interpolator which produces a spline (a 2nd order parabola) with 1/N^3.

There are numerous other interpolators. You could come up with Nth order polynomial interpolators to demonstrate the trade-offs at play and compare to very high order interpolators like sinc FIR "polyphase filters" and so on.

Then of course you wouldn't be limited to just a table[] = sin(), but you could place any data points in the table such as a sample of a male "A" vowel or similar and plot the spectrum resulting from various interpolators.

Unfortunately we're talking about programming here, and 99% of the effort is knowing how to effectively write code which is a totally different subject than understanding the mathematics behind calculus and polynomial interpolators.

Post

This is almost entirely over my head, as I have no background in DSP. However, I do have some experience writing a curve-based drawing program with various types of curves. This may already be exactly what aciddose is describing, but how about writing a tool which loads an audio file, approximates it with some type of curve that passes through its control points, and resynthesizes it at a variable playback rate?
Incomplete list of my gear: 1/8" audio input jack.
