Parameter smoothing for delay line?

DSP, Plugin and Host development discussion.

Post

earlevel wrote: You may just be stating it in an awkward way that you don't intend, but...you do not want to initialize the filter when it changes. You'll have many incoming changes as the knob turns, and the filter should just continue to roll.
I initialize the filter once, before the plugin starts.
earlevel wrote: If you're running it at the sample rate, b1 should be in the 0.999 area (and a0 = 1 - b1). Looking at Echo Farm, it looks like I used 10 Hz for most of the knobs, and I wanted the delay knob to be very slushy to get the effect of a tape delay changing, so it's 0.7 Hz. That's about 0.9986 and 0.9999, respectively, at 44.1 kHz. In other words, your a0 should be a couple of orders of magnitude smaller.

If you're running the filter at 44.1 kHz, for instance, you have your filter cutoff at 740 Hz, and that's why you're getting zipper noise.
I'm now setting a0 this way:

Code: Select all

a0 = 1 - exp(-1 / (0.05 * samplingrate));
b1 = 1 - a0;
and I'm using this function in the process block of the plugin:

Code: Select all

inline float process(float x) { z1 = (x * a0) + (z1 * b1); return z1; }
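
As a cross-check against the cutoff numbers quoted above: with the usual one-pole mapping b1 = exp(-2*pi*fc/fs), a 10 Hz cutoff at 44.1 kHz gives b1 ≈ 0.9986 and 0.7 Hz gives b1 ≈ 0.9999, matching earlevel's figures. A minimal sketch (function name hypothetical):

Code: Select all

#include <math.h>

/* One-pole smoother coefficients from a cutoff frequency in Hz.
   At fs = 44100: fc = 10 -> b1 ~ 0.9986, fc = 0.7 -> b1 ~ 0.9999. */
void coeffsFromCutoff(float fc, float fs, float *a0, float *b1) {
    *b1 = expf(-6.2831853f * fc / fs);  /* exp(-2*pi*fc/fs) */
    *a0 = 1.0f - *b1;
}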
earlevel wrote: But you also can't be resetting the filter when the knob moves—that will give you zipper noise too.
So would it be correct to reset the filter only when the knob drag ends, instead of whenever its value changes?

Post

luketre wrote:
earlevel wrote: But you also can't be resetting the filter when the knob moves—that will give you zipper noise too.
So would it be correct to reset the filter only when the knob drag ends, instead of whenever its value changes?
No. What he's trying to say is that you don't want to reset the filter at all, except possibly when you do a full plugin reset (for VST that would be on resume() call).

The "zipper noise" is essentially the result of the high-frequency contents in the modulation waveform, because viewed as a continuous signal it has steps because of sparsity of changes. You then take the waveform with the "zipper steps" and then just lowpass it like any other signal, continuously. You are not smoothing the "steps" as such, you are smoothing the actual signal that is steppy.

Post

mystran wrote:
luketre wrote:
earlevel wrote: But you also can't be resetting the filter when the knob moves—that will give you zipper noise too.
So would it be correct to reset the filter only when the knob drag ends, instead of whenever its value changes?
No. What he's trying to say is that you don't want to reset the filter at all, except possibly when you do a full plugin reset (for VST that would be on resume() call).

The "zipper noise" is essentially the result of the high-frequency contents in the modulation waveform, because viewed as a continuous signal it has steps because of sparsity of changes. You then take the waveform with the "zipper steps" and then just lowpass it like any other signal, continuously. You are not smoothing the "steps" as such, you are smoothing the actual signal that is steppy.
I'd like to add that essentially whenever you reset the smoothing filter, you are in fact inserting a step that looks pretty much the same as the stuff you are trying to get rid of :)

edit: oops, didn't mean to add a new reply, just add the above to the previous.. well sorry about the bump

Post

I can't believe..

Solved by substituting this:

Code: Select all

a0 = 1 - exp(-1 / (0.5 * samplingrate));
with this:

Code: Select all

a0 = 0.0000453504455603;

and then I discovered my error was passing the sample rate outside of the prepareForPlay method.. sorry! :dog:

Post

I don't know what the etiquette regarding necromancy is here, but I figure it's better to bump this topic than start a new one. The replies here and in related threads have helped me a lot, but it would be great if somebody could lend me their ears and confirm I'm not doing anything stupid.

1. My delay algorithm simulates a fixed read head with an adjustable write head in front of it. Increasing the delay time increases the distance of the write head from the read point.
2. I'm doing linearly interpolated reads from the delay buffer.
3. Delay time is specified by the user in seconds. I have a one-pole filter over this parameter: it multiplies seconds by the sample rate and transitions smoothly to the new sample-offset value based on the delta between it and the old offset. For example, it would take 2 seconds to go from a delay time of 1.0 to 3.0; the actual values being interpolated would be 48000.0 to 144000.0. I used the model here.

This doesn't sound as zippery as it did before I read these threads, but it's still rather harsh and artificial to my ears. I'm not after a perfect emulation of analog tape, but I wonder if I'm overlooking something. Please have a listen

>>here<<

In this clip, I'm going from 0 delay time to 1.5s delay. Does something sound wrong to you?

I can paste plain C code if need be, but this post is running long already.

Thanks for your help.

Post

Shouldn't you be doing linearly-interpolated writes to represent the smoothly moveable write head?

If you fix the "tape speed" you might not even need to interpolate the reads.

Post

imrae wrote: Sun Jan 29, 2023 8:57 am Shouldn't you be doing linearly-interpolated writes to represent the smoothly moveable write head?

If you fix the "tape speed" you might not even need to interpolate the reads.
Thanks for your reply. This makes sense, though it's a bit backwards compared to what I'm familiar with. I implemented the lerp write as

Code: Select all

void writeArray(Array *a, float index, float sample) {
  /* Array is a ring buffer with float *data and size_t size.
     Distribute the sample between the two cells around the
     fractional index (additive, linearly interpolated write). */
  size_t n = (size_t)index;
  float r = index - n;
  a->data[n % a->size]       += sample * (1.0f - r);
  a->data[(n + 1) % a->size] += sample * r;
}
I think that's correct. It has to be additive, yeah? The read head zeroes the tape at its position after fetching each sample, so the accumulation isn't a problem.

I did away with the -2π setting from the example I linked, and am using mystran's suggested settings for the filter instead.

Code: Select all

#include <math.h>

typedef struct Filter {
  float x;    /* target value (delay time in samples) */
  float a;    /* input coefficient                    */
  float b;    /* feedback coefficient, 1 - a          */
  float z;    /* filter state                         */
  float rate; /* sample rate in Hz                    */
} Filter;

Filter filter(float rate) {
  Filter f = {0};
  f.rate = rate;
  /* one-pole smoother with a ~50 ms time constant */
  f.a = 1.0f - expf(-1.0f / (0.05f * f.rate));
  f.b = 1.0f - f.a;
  return f;
}

void setFilter(Filter *f, float delayTimeSeconds) {
  f->x = delayTimeSeconds * f->rate;
}

void applyFilter(Filter *f) {
  f->z = (f->x * f->a) + (f->z * f->b);
}
When user input changes the delay time, `setFilter` is run. `applyFilter` runs on every clock tick. The actual delay code has a pointer to the filter's `z` field, which it adds to the read head phase to determine the sample offset for `writeArray`.

Going from 0.0 delay to 1.5s, it sounds better than yesterday:

>>clip<<

But it's noticeably zippery jumping from 1.5s to 8.2s. The write head position seems to jump too far, too fast, and the delay effect cuts out entirely for a while instead of smearing as expected.

>>clip<<

Some follow up Qs:

1. Is the first clip more or less what I should expect? It kind of sounds like a tape speeding up to me, but are there still definite glitches in it?
2. Most of the discussion in these threads has centered around smoothing knob input, and I think the problem in my second clip is related to that. Rather than filtering n steps from a knob turning from 1.5 to 8.2 (and passing through 2.5, 3.5...), I'm in a command line shell, asking the program to jump directly from 1.5 to 8.2s. Is there an alternative smoothing method I should look into for this scenario?

Post

Sorry, haven't read the entire thread, but IIUC you're talking about moving the write head instead of moving the read head.

The problem with moving the write head is that it cannot reasonably handle large time jumps. Imagine an analog device implementing this: even if you smooth the jump, you'll essentially be moving quickly along a long piece of tape, writing a relatively short piece of the input signal onto it, which upon playback will be perceived as a slowdown by a few orders of magnitude.

I guess the reason it "sounds better" than moving the read head is that you are effectively "adjusting the interpolation cutoff" (a cutoff that is fixed relative to the array becomes variable relative to the output), and it happens to occur in exactly the right way. But I don't know if you can solve the jump problem.

A counterpart implementation with a moving read head would need to dynamically adjust the interpolation cutoff, which in the case of large jumps would require a very long interpolation window, so the problem doesn't go anywhere. It's just that ignoring the problem with a moving read head doesn't result in gaps; it results in playback artifacts.

An intermediate solution is moving the tape, where you "physically change the array writing speed". This can be CPU-costly at short delay times unless you optimize the implementation. The interpolation-cutoff problem doesn't go anywhere, but the artifacts produced by delay-time jumps and the lack of cutoff adjustment are much more "musical" and less prone to sounding like digital artifacts.
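
If it helps, here is a rough sketch of that moving-tape idea under some simplifying assumptions (fixed heads, lerp write and read, tape speed close to 1 so no cell is skipped outright; every name is invented):

Code: Select all

#include <stddef.h>

#define TAPE_LEN      (1 << 18)  /* ring buffer, power of two        */
#define HEAD_DISTANCE 4800       /* tape cells between the two heads */

static float  tape[TAPE_LEN];
static double pos = 0.0;         /* fractional tape position         */

float tapeTick(float in, double speed /* tape cells per sample */)
{
    size_t n = (size_t)pos;
    float  r = (float)(pos - (double)n);

    /* lerp-write the input across the two cells under the write head */
    tape[n & (TAPE_LEN - 1)]       += in * (1.0f - r);
    tape[(n + 1) & (TAPE_LEN - 1)] += in * r;

    /* lerp-read the cells under the read head, a fixed tape distance
       behind the write head; delay time = HEAD_DISTANCE / speed */
    size_t m   = (n - HEAD_DISTANCE) & (TAPE_LEN - 1);
    float  out = tape[m] * (1.0f - r)
               + tape[(m + 1) & (TAPE_LEN - 1)] * r;

    /* erase the tape just behind the read head so the additive
       writes don't accumulate on the next pass around the loop */
    tape[(n - HEAD_DISTANCE - 2) & (TAPE_LEN - 1)] = 0.0f;

    pos += speed;                /* the tape moves, the heads don't  */
    return out;
}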

Post

I'm inclined to support the idea that whatever the solution to a problem might be, it's probably not moving the write position.

Post

It's an interesting process to try and model, even if there are more elegant ways of reaching the same effect.

Physically, the head should be writing to all the tape (i.e. buffer) it moves across, not just the final position at each sample.

But also, physically, there should be some consequences of writing more or less quickly? I'm not really familiar with tape-speed effects, but I would imagine fast movements degrade the quality a bit.

Post

This is an interesting thread. I'm toying around with creating a tape delay. There are a few things that I can't get my head around.

A classic tape delay (Space Echo) has both multiple read heads and variable tape speed. I've exposed parameters for both, but does the variable tape speed in a digital emulation actually give any different, and desirable, result than a stepless variable read position? I'm introducing a lot of artefacts because of sub-sample read indices. I'm trying linear interpolation here but haven't managed to figure it out entirely. Is it even necessary, or can this be faked with the read-head position only?

Also, in order to get that glitchy feel when adjusting the delay time: is this a result of the tape not getting up to speed as quickly as the knob turns? Or how do I get that effect?

Post

First of all, you will need interpolation unless you snap the delay to whole samples. The better the interpolation, the fewer artifacts you get. If you've not written delays before, start with linear interpolation, but understand that (especially without any oversampling) linear interpolation is far from ideal. Cubic Hermite (a.k.a. Catmull-Rom) is usually what I'd recommend as the "next step up" in quality, and it's quite a bit better than linear. If that's not good enough either, "proper" sinc interpolation is also possible, though more complex (especially as some optimizations like SIMD evaluation are pretty much required for decent performance).
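
For reference, here is a sketch of a Catmull-Rom read from a delay ring buffer (the buffer layout, mask, and names are assumptions, not code from any particular product):

Code: Select all

/* 4-point, 3rd-order Hermite (Catmull-Rom) read around integer
   index i with fractional part t; mask = bufferSize - 1 assumes a
   power-of-two ring buffer. */
static inline float hermiteRead(const float *buf, size_t mask,
                                size_t i, float t)
{
    float ym1 = buf[(i - 1) & mask];
    float y0  = buf[ i      & mask];
    float y1  = buf[(i + 1) & mask];
    float y2  = buf[(i + 2) & mask];

    float c1 = 0.5f * (y1 - ym1);
    float c2 = ym1 - 2.5f * y0 + 2.0f * y1 - 0.5f * y2;
    float c3 = 0.5f * (y2 - ym1) + 1.5f * (y0 - y1);

    return ((c3 * t + c2) * t + c1) * t + y0;
}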

If we're interested in varying the tape speed over time, though, there's another headache. Changing the tape speed (or BBD clock rate) doesn't instantly change the delay; rather, it changes how fast the signal written onto the tape advances towards the read head. So ideally we should integrate the tape speed over the distance travelled to know how long the signal currently under the read head took to travel from the write head, which then gives us the actual delay time at which we should read from our digital delay buffer. The common theme among all accurate solutions to this problem is that they are somewhat complicated, but effectively it results in a form of smoothing of the delay time. Even if this is not modelled accurately, smoothing the delay time will largely avoid glitches and give you a temporary pitch shift whenever the speed at which the signal is written differs from the speed at which it is read, which is probably the most important effect perceptually.
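
One naive way to realize that integration idea, purely as a sketch (per-sample linear search for clarity; a real implementation would track the delay incrementally, and the read should be interpolated; all names hypothetical):

Code: Select all

#include <stddef.h>

#define LEN (1 << 18)            /* ring buffer size, power of two  */
static float  audio[LEN];
static double travel[LEN];       /* cumulative tape distance        */
static size_t w = 0;             /* monotonic write counter         */

/* speed: tape speed in distance units per sample;
   headDistance: fixed tape distance between write and read heads. */
float delayTick(float in, double speed, double headDistance)
{
    audio[w & (LEN - 1)]  = in;
    travel[w & (LEN - 1)] = (w ? travel[(w - 1) & (LEN - 1)] : 0.0)
                          + speed;

    /* walk back until the tape written between the two heads covers
       the head spacing; d is then the delay in whole samples */
    double here = travel[w & (LEN - 1)];
    size_t d = 1;
    while (d + 2 < LEN && d <= w &&
           here - travel[(w - d) & (LEN - 1)] < headDistance)
        ++d;

    float out = audio[(w - d) & (LEN - 1)];  /* nearest-sample read */
    ++w;
    return out;
}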

Another thing: if the tape speed varies very rapidly, then the playback "sampling rate" at which we read from the delay varies as well, in which case sinc interpolation theoretically breaks down and we should do something fancier (though this gets fairly quickly into research territory). But this is not a huge concern until the variation in speed is so rapid that the pitch shift gets into the territory of octaves, so if you're just adding some slight wow/flutter it's probably not worth worrying about, and for special effects like tape-stop it might not be terribly noticeable anyway. I'm not sure physical tape can actually tolerate rapid enough speed changes (someone with more experience with tapes would have to comment on this) for this to really be something to worry about. With BBDs, though, it's a real concern.

Post

technodave wrote: Sun Oct 01, 2023 9:08 am but does the variable tape speed in a digital emulation actually give any different, and desirable, result than a stepless variable read position?
Both can be made mathematically equivalent, and this is covered by the article linked a couple of posts above. The problem is that you get pretty nice behavior with changing tape speed out of the box, while reaching the same effect with a variable read position requires jumping through quite a number of hoops (again, refer to the paper above).
technodave wrote: Sun Oct 01, 2023 9:08 am Also, in order to get that glitchy feel when adjusting the delay time: is this a result of the tape not getting up to speed as quickly as the knob turns? Or how do I get that effect?
Read the paper :D Short answer: the basic effect is there even with instantaneous change of tape speed.

Post

This paper presents the dual-read-head approach, which works well both for delay-time modulation and for larger jumps: https://quod.lib.umich.edu/cgi/p/pod/do ... format=pdf
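
For anyone skimming, the core idea as I understand it: open a second read tap at the new delay and crossfade between the taps, so neither a zipper step nor a pitch sweep is produced. A minimal sketch (linear fade, integer offsets; not the paper's exact formulation, and all names are invented):

Code: Select all

#include <stddef.h>

#define LEN (1 << 18)
static float  buf[LEN];
static size_t w = 0;

static size_t headA, headB;   /* delay offsets in samples        */
static float  fade = 1.0f;    /* 1.0 => output comes from head A */
static float  fadeStep = 0.0f;

/* Jump to a new delay: the old tap keeps sounding on head B while
   head A fades in at the new offset (fadeSamples > 0). */
void setDelay(size_t newOffset, float fadeSamples)
{
    headB    = headA;
    headA    = newOffset;
    fade     = 0.0f;
    fadeStep = 1.0f / fadeSamples;
}

float tick(float in)
{
    buf[w & (LEN - 1)] = in;
    float a = buf[(w - headA) & (LEN - 1)];
    float b = buf[(w - headB) & (LEN - 1)];
    if (fade < 1.0f) fade += fadeStep;
    if (fade > 1.0f) fade = 1.0f;
    ++w;
    return a * fade + b * (1.0f - fade);
}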

Post

Thanks for the great info, guys. I tried to read the paper. I got the basic stuff about perception at the beginning and understood that changing tape speed pitches the recording somewhat, while just moving the tape read head doesn't really do that. The math was a bit too much, but thanks @mystran for explaining how the actual delay is achieved. I have managed to record and read at speeds other than 1:1, since I understand that both record and read speed need to be changed. What I found then was that changing the speed alone never changes the delay time, just like you mentioned. Now I assume I could adjust the read position in accordance with a calculated travel time between record and read. I'm going to try to figure that calculation out.
