How to compensate for LP detuning in Karplus-Strong

DSP, Plugin and Host development discussion.

Post

It's ok Mystran 😊 I was simply saying that it works without artifacts, and this is a fact... after all, anybody can actually try it, since I explained the details... surely I am not going to publish a paper on it lol, because in that case all the underlying math would of course have to be demonstrated rigorously. I used Mathematica once, when I was into pure maths (a different story from applied maths), but there are situations where you cannot afford to spend days on calculations (time is scarce and priorities are strict) or to diverge too much from your current goal...

Post

mystran wrote: Thu Feb 13, 2020 9:06 pm While sinc-interpolation is more expensive, it also has a problem with feedback loops in the sense that the (unavoidable) latency puts a limit on the minimum delay length (eg. if you have 16 samples of latency, then you can't have a loop shorter than 16 samples, because that would require negative delay). One can reduce this problem by oversampling, but then it gets even more expensive. I would usually suggest "cubic-interpolation"
Since you mentioned this, I was wondering:

Because the latency (for linear phase-response) only matters inside a feedback loop, is it ever worth upsampling and then using a much cheaper interpolation for the feedback? E.g. a linear interpolator is N times more work if you've upsampled xN, but (intuitively) the error is N^2 better, same tradeoff as increasing your windowed-sinc length. (Cubic would presumably be N^4 better, right? Which seems like it might produce some good trade-offs.)

The Karplus-Strong resonator is itself linear - so could you (1) resample without worrying about latency, or non-linear phase, or having a particularly flat pass-band, and then (2) either accept these issues or compensate a bit once you're back down to your original sample-rate?

Post

signalsmith wrote: Mon Feb 17, 2020 11:58 am Because the latency (for linear phase-response) only matters inside a feedback loop, is it ever worth upsampling and then using a much cheaper interpolation for the feedback? E.g. a linear interpolator is N times more work if you've upsampled xN, but (intuitively) the error is N^2 better, same tradeoff as increasing your windowed-sinc length. (Cubic would presumably be N^4 better, right? Which seems like an even better tradeoff.)
Not sure about the error metrics exactly... but the general idea is solid.

For longer loops where the latency isn't an issue, using sinc-interpolation directly is potentially slightly faster than oversampling, if you don't need the oversampling for anything else (e.g. non-linearities). Obviously, if there is a lot of other processing in the loop, then oversampling will also increase the cost of that processing.

As for non-linear phase... this is mostly a problem when it's inside the loop, because there it will detune the harmonics. If it's from the resampling and hence outside the loop, then it doesn't do any additional damage beyond what you would expect from non-linear phase for resampling in general.
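
To put a rough equation on that point (my paraphrase, not from the thread): the k-th partial of a feedback loop sits where the total loop delay is k whole periods. With an integer delay of N samples plus an interpolator phase delay of d(f) samples inside the loop, partial k lands where f_k*(N + d(f_k)) = k*fs. If d is constant, all partials shift together and the note simply retunes; if d varies with frequency, the upper partials drift relative to the fundamental, which is the detuning described above.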

Post

Sorry, a correction (in case somebody ends up in this thread in the future and is interested).
My formula ending with fr=fr^2 was too crude an approximation; the feedback in the K/S process amplified the error. Here is an almost perfect approximation for the compensating filter I invented.

So, we have linear interpolation to read a sample y from a fractional delay buffer position p:
i = (int)p
f = p - i
y[p] = buffer[i]*(1 - f) + buffer[i + 1]*f
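
As a minimal self-contained C sketch of that read (my illustration; it assumes p is non-negative and a circular buffer whose length BUF_SIZE is a power of two):

Code: Select all

#define BUF_SIZE 4096  /* assumed power of two */

float read_linear(const float *buffer, float p)
{
    int   i = (int)p;            /* integer part of the position */
    float f = p - (float)i;      /* fractional part, 0 <= f < 1  */
    float a = buffer[i & (BUF_SIZE - 1)];
    float b = buffer[(i + 1) & (BUF_SIZE - 1)];
    return a*(1.0f - f) + b*f;   /* linear interpolation         */
}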

Interpolation is a fractional delay filter of the form:

y[t] = x[t]*(1 - f) + x[t-1]*f

with maximal HF attenuation, centered at Nyquist, for f=0.5.
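
(A quick check of that claim: at Nyquist, consecutive samples alternate sign, so x[t-1] = -x[t] and the filter gives y[t] = x[t]*(1-f) - x[t]*f = x[t]*(1-2f). The gain |1-2f| is 1 at f=0 and f=1, with a complete null at f=0.5.)
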
So I place another filter downstream to do the opposite task as compensation, i.e. a "Nyquist resonator" which has no effect for f=0, maximum effect for f=0.5, and again no effect for f=1. Such a filter is:

y[t] = x[t] - f'*(y[t-1] - x[t])

(I call it a feedback comb filter: it would literally be one if the delay were >1, but it is still formally one.)
Now, here is the correct way of computing f' as a function of f:
f' = 2*f;
if (f' > 1) f' = 2 - f';
f' = f'/(2 - f');   // the quadratic approximation was too coarse
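
Putting both stages together as a C sketch (my illustration; buffer wraparound is omitted for brevity, and note that the next reply analyzes what this combination actually does):

Code: Select all

/* Fractional read followed by the "Nyquist resonator" compensator
   described above. *y1 holds the compensator's previous output. */
float f2_from_f(float f)
{
    float f2 = 2.0f*f;
    if (f2 > 1.0f) f2 = 2.0f - f2;
    return f2/(2.0f - f2);
}

float read_compensated(const float *buffer, float p, float *y1)
{
    int   i = (int)p;
    float f = p - (float)i;
    float z = buffer[i]*(1.0f - f) + buffer[i + 1]*f;  /* FIR stage      */
    float y = z - f2_from_f(f)*(*y1 - z);              /* feedback stage */
    *y1 = y;
    return y;
}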

Now my gimmick plays happily with flat HF response for every note and perfect pitch resolution ^_^
I simply forgot to check the spectrum analyzer with the feedback close to one, so I didn't notice the amplified error of the quadratic approximation. Now it is pretty much perfect.

Post

elena2 wrote: Thu Feb 20, 2020 1:40 pm Sorry, a correction (in case somebody ends up in this thread in the future and is interested)

[...]

Now my gimmick plays happily with flat HF response for every note and perfect pitch resolution ^_^
I simply forgot to check the spectrum analyzer with the feedback close to one, so I didn't notice the amplified error of the quadratic approximation. Now it is pretty much perfect.
So at a high level: the ideal fractional-delay filter should have phase proportional to frequency, to give constant delay for all frequencies. The linear-interpolation filter (although intuitive) doesn't have this property - and also you haven't accounted for the phase-response of your "correction" filter at all.

Let's take a quick look at the equations. If we rename f/f' as f1/f2 for readability, and call the result of your first filter (the FIR one) z[t], then we have:

Code: Select all

z[t] = x[t]*(1 - f1) + x[t - 1]*f1

y[t] = z[t] - (y[t - 1] - z[t])*f2
     = z[t]*(1 + f2) - y[t - 1]*f2
First, we're going to re-arrange the second equation above, to get z[t] in terms of y[t]:

Code: Select all

z[t] = (y[t] + y[t - 1]*f2)/(1 + f2)
     = y[t]/(1 + f2) + y[t - 1]*f2/(1 + f2)
Let's look in particular at the case where your fractional-delay f1 < 0.5. In this case, using your definitions we get f2 = f1/(1 - f1). That means that 1/(1 + f2) = (1 - f1), and f2/(1 + f2) = f1. So...

Code: Select all

z[t] = y[t]*(1 - f1) + y[t - 1]*f1
But hang on - right at the top, we had something suspiciously similar:

Code: Select all

z[t] = x[t]*(1 - f1) + x[t - 1]*f1
There is only one solution for this: y[t] = x[t]. If your fractional delay is < 0.5, then your two filters cancel each other out. The result is not delayed at all.
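
A quick numerical confirmation of that cancellation (a standalone sketch I've added, not code from the thread):

Code: Select all

/* Feed noise through both stages with f1 < 0.5 and confirm y == x
   up to rounding error. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    float f1 = 0.3f;
    float f2 = f1/(1.0f - f1);    /* the f1 < 0.5 case from above */
    float x1 = 0.0f, y1 = 0.0f;   /* x[t-1] and y[t-1]            */
    float max_err = 0.0f;
    for (int t = 0; t < 1000; t++) {
        float x = (float)rand()/RAND_MAX - 0.5f;
        float z = x*(1.0f - f1) + x1*f1;  /* FIR stage      */
        float y = z*(1.0f + f2) - y1*f2;  /* feedback stage */
        max_err = fmaxf(max_err, fabsf(y - x));
        x1 = x; y1 = y;
    }
    printf("max |y - x| = %g\n", max_err);  /* prints ~1e-7 */
    return 0;
}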

I'm not going to go through the case for f1 > 0.5, or f1 = 0.5. The short version: you've ended up with an all-pass filter - like this, but not tuned properly to get the right delays.
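
For reference, the textbook first-order allpass fractional delay (my transcription of the standard result, along the lines of that CCRMA page) uses the same FIR-plus-feedback structure, but with the coefficients matched so the two halves complement rather than cancel:

Code: Select all

a = (1 - d)/(1 + d)                    /* d = desired fractional delay       */
y[t] = a*x[t] + x[t-1] - a*y[t-1]      /* low-frequency delay of d samples   */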

-------------------------------------------------------------

In summary:

What you started with is actually a pretty good insight - this idea of a paired FIR filter and a feedback filter. This is how IIR all-pass filters are structured! In fact, one view of IIR filters in general is an FIR filter plus the "inverse" of an FIR filter (a.k.a. a pure feedback filter).
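
Spelling that view out in the thread's difference-equation notation (my addition):

Code: Select all

FIR:             y[t] = b0*x[t] + b1*x[t-1] + ...
pure feedback:   y[t] = x[t] - a1*y[t-1] - a2*y[t-2] - ...
general IIR:     y[t] = b0*x[t] + b1*x[t-1] + ... - a1*y[t-1] - a2*y[t-2] - ...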

You were remarkably close to producing the all-pass filter from that CCRMA link above (which is the "completed" version of your idea). I think if you dug into the theory here, you might find it satisfying, and you could design things with a solid backing (and avoid errors like accidentally cancelling out your own filter).

But I'm going to stop following this thread, because I don't think this particular kind of iteration is a good way to learn, or to design filters without basic errors in them. I hope you do get around to tackling the maths. :)

Post

Signalsmith, thanks for your feedback. My problem is that too many formulas tend to confuse me and quickly give me a headache, that's why I often choose the pathetic approach of attempting to rediscover things in my own way lol.. now let me digest your explanation and the link above... if there are some values for which the filters cancel each other, I still have to figure out why my algorithm now appears to sound fine and to resolve pitch continuously... if they cancel, I should get integer delays at some point, I suppose...

Post

Mm ok.. all that makes sense now. Thank you very much Signalsmith, and sorry for my pathetic attempt to reinvent the wheel. There is always something new to learn!

Post

Sooo... the final result turned out very simple and effective:

y[t] = f*buffer[i+1] + buffer[i] - f*y[t-1]
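
For anyone landing here later, a rough sketch of how that read might sit inside a Karplus-Strong loop (my illustration, not code from the thread; buffer size, names, and the damping strategy are arbitrary):

Code: Select all

#define BUF_SIZE 4096            /* power of two, >= longest delay  */

static float buffer[BUF_SIZE];
static float ap_z1 = 0.0f;       /* allpass state, y[t-1]           */
static int   widx  = 0;          /* write index                     */

/* One output sample of a Karplus-Strong loop with a total delay of
   `delay` samples and loop gain `feedback` (< 1 so the note decays).
   Excitation and extra damping filters are omitted. */
float ks_tick(float delay, float feedback)
{
    float p = (float)(widx + BUF_SIZE) - delay;   /* read position   */
    int   i = (int)p;
    float f = p - (float)i;
    float a = buffer[i & (BUF_SIZE - 1)];
    float b = buffer[(i + 1) & (BUF_SIZE - 1)];
    float y = f*b + a - f*ap_z1;   /* the allpass read shown above   */
    ap_z1 = y;
    buffer[widx] = y*feedback;     /* write back, closing the loop   */
    widx = (widx + 1) & (BUF_SIZE - 1);
    return y;
}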

For now everything seems and sounds fine (strangely enough even my broken gimmick seemed so, lol, but analyzing the waveform you could actually notice that for some f the filters canceled each other).
That's all... hopefully 😀 Thanks again to you all for your precious help

Post

elena2 wrote: Thu Feb 20, 2020 4:24 pm My problem is that too many formulas tend to confuse me and quickly give me a headache, that's why I often choose the pathetic approach of attempting to rediscover things in my own way lol..
I've said this plenty of times, but I'll say it again: learning math is usually a struggle for everyone and it just takes time, that's perfectly normal. On the bright side, once you finally get the hang of something, it can simplify things a lot, because it can give you a more effective framework to solve your problems and avoid much of the guesswork, at least until you hit the next road-block.

Here's a quote from an interview with Sir Andrew Wiles, who proved Fermat's Last Theorem:
‘Mathematicians struggle with mathematics even more than the general public does,’ said Sir Andrew. ‘We really struggle. It’s hard.’
I don't think there is anything really wrong with reinventing things as such. Rather, I think it can be an effective learning tool, as long as you keep an open mind and accept that whatever you come up with might have limitations that only become apparent later.

That said, if you're serious about DSP, then learning a little bit of complex analysis and other "engineering math" is certainly a worthwhile investment in the long run, but don't worry: we know it's hard. :)

Post

Can you believe I have that book? As I said, I was once into pure math. But... wise words, I fully agree.
