Zipper Noise

DSP, Plugin and Host development discussion.

Post

Alright chaps? :D

I'm in the closing stages of a new delay plugin and am experiencing the dreaded zipper noise when adjusting the delay time knob.

I'm pretty sure this is happening because the stored data being recalled from the delay buffer is jumping around as the delay time knob is moved and therefore the pointer to the buffer is moving, causing clicks, pops and zips. :(

I was wondering if the panel had experience of this and could possibly suggest the best route to fixing it. I'm thinking that filtering the output of the delay buffer is one approach, but maybe you have other ideas?

Code snippets appreciated :D

Post

try interpolating: the delay should be continuous, but your samples are not -> zip. With some interpolation you get many more in-between delay values, so you can *perhaps* achieve your goal.
Try linear, cubic, or B-spline interpolation :D (the last is half a joke, as it's... expensive).
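As a concrete example of the cubic option, here is a standard 4-point Hermite interpolator (just a sketch - the function name is mine, not from anyone's plugin). It blends the two samples either side of the fractional read position, using one extra neighbour on each side:

```cpp
// 4-point, 3rd-order Hermite interpolation.
// xm1, x0, x1, x2 are four consecutive samples from the delay buffer;
// t in [0,1) is the fractional position between x0 and x1.
float hermite4(float xm1, float x0, float x1, float x2, float t)
{
    float c = 0.5f * (x1 - xm1);
    float v = x0 - x1;
    float w = c + v;
    float a = w + v + 0.5f * (x2 - x0);
    float b = w + a;
    return ((a * t - b) * t + c) * t + x0;
}
```

At t = 0 this returns x0 exactly and at t = 1 it returns x1, so the output is continuous as the fractional read position sweeps through the buffer.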

Post

thanks for that, Miles -

do you mean I should be interpolating the audio data from the delay buffer or the "change in delay time" caused by the user waggling the knob?

your post seems to intimate that there is no real cure for zipper noise in this scenario. am I interpreting you correctly?

Post

This might not work, but I'm just thinking as I'm typing...

What about a circular buffer with six pointers: the beginning, the end, a left locator, a right locator, plus the read and write pointers. A change in delay time would reset the left and right locators, and the read/write pointers would cycle at the locators (in addition to at the buffer ends, of course). You'd then have time to do a quick fade around the locators to smooth out the transition (making sure not to overlap the current read/write position).

I'm not sure I've thought this through properly, but it's just an idea.
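The fade idea could be reduced to a simpler two-read-pointer crossfade: when the delay time changes, keep reading from the old position while fading in a second read pointer at the new position. A minimal sketch (hypothetical names, fixed fade length; not anyone's actual implementation):

```cpp
#include <cstddef>
#include <vector>

struct CrossfadeDelay {
    std::vector<float> buf;
    std::size_t writePos = 0;
    std::size_t readOld = 0, readNew = 0;
    int fadeLen = 256;      // fade length in samples
    int fadeCount = 0;      // > 0 while a fade is in progress

    explicit CrossfadeDelay(std::size_t size) : buf(size, 0.0f) {}

    void setDelay(std::size_t samples) {
        // if a fade is running, restart from the position we were fading to
        readOld = (fadeCount > 0) ? readNew : readOld;
        readNew = (writePos + buf.size() - samples) % buf.size();
        fadeCount = fadeLen;
    }

    float process(float in) {
        buf[writePos] = in;
        float out;
        if (fadeCount > 0) {
            float t = 1.0f - (float)fadeCount / (float)fadeLen; // 0 -> 1
            out = (1.0f - t) * buf[readOld] + t * buf[readNew];
            --fadeCount;
        } else {
            out = buf[readNew];
        }
        writePos = (writePos + 1) % buf.size();
        readOld  = (readOld  + 1) % buf.size();
        readNew  = (readNew  + 1) % buf.size();
        return out;
    }
};
```

Unlike resampled playback, this keeps the pitch constant; the trade-off is that during the fade you briefly hear a mix of two delay times.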

You could alternatively use an interpolation algorithm to change the length of the entire buffer to the length of your new buffer, and scale the position of the read/write pointers to match. You'll get a pitch change though.

Post

thanks, texture. some good ideas in there - I'll do some more experimenting before I settle for something.

If I find a method that eliminates it, I'll post it here.

About the tuning thing - I've heard delays that use that - I think retro-delay does - and it's a weird old effect when the delay time gets changed. I'll try it out, though.

Post

If you're using a variable-length constant-sample-rate buffer, perhaps you should change to a constant-length, variable-sample-rate implementation??

Forever,

Kim.

Post

scuzzphut wrote: About the tuning thing - I've heard delays that use that - I think retro-delay does - and it's a weird old effect when the delay time gets changed. I'll try it out, though.
yeah - basically you're changing the speed of the tape. Since the sample rate is constant, in order to change the tape speed you have to change the size of the buffer. In order to retain all of the information in the buffer (as would happen when speeding up the tape), you would need to re-pitch the whole buffer - which implies interpolation and decimation. I think this might be the best option. EDIT: no actually it's not!! (see post below...)

I've started working on a resampling algorithm, using a windowed sinc FIR. One thing I'm stuck on at the moment is getting a good routine for factorising numbers to reduce the number of taps in my filter (meaning I can do multiple interpolations rather than just 1).
Last edited by texture on Sat May 01, 2004 4:15 pm, edited 1 time in total.

Post

texture wrote: since the sample rate is constant, you would need to re-pitch the whole buffer
No no no! Keep the buffer fixed, just read it at variable speed! Don't resample the whole buffer, just resample the playback!

Forever,

Kim.

Post

Jeez wrote: No no no! Keep the buffer fixed, just read it at variable speed! Don't resample the whole buffer, just resample the playback!
Ahhh, I see what you mean.

It would work either way, and resampling the playback would be better if the delay time changes a lot, because you wouldn't have to resample the entire buffer each time. The latter is obviously pretty costly, and you may waste a lot of the 'results' of resampling the buffer.
However, by resampling the playback you would be constantly resampling even when the delay time doesn't change, which would push up the average CPU consumption - you may not even really need to be resampling. You would also need to resample writes to the buffer. If the plugin supports long delay times, having a really short delay time would give rise to a lot of CPU and memory access for absolutely nothing. You will also lose a little bit of quality depending upon the quality of the interpolation.

Neither is exactly optimal.

I wonder if there is a combination of the two - the buffer gets resampled as you play through it and written back, whilst the buffer gets adjusted in size until the two operations stabilise? It might require an extra piece of information per sample relating to its rate, though. So if the user wants to waggle the control a lot, they can expect more CPU consumption, but when the delay time is static, the plugin effectively self-optimises the buffer to reduce CPU usage.

Post

texture wrote:I wonder if there is a combination of the two - the buffer gets resampled as you play through it & written back, whilst the buffer gets adjusted in size until the two operations stabilise? Not sure if this would require an extra piece of information for each sample though relating to its rate, so if the user wants to waggle the control a lot, then they can expect more CPU consumption, but when the delay time is static, the plugin effectively self optimises the buffer to reduce CPU usage.
Wow, that's a neat idea.

Mind you, resampling is actually pretty cheap (on CPU). It's essentially a single sampler voice (without all the filtering and enveloping etc).

Forever,

Kim.

Post

Jeez wrote: Mind you, resampling is actually pretty cheap (on CPU). It's essentially a single sampler voice (without all the filtering and enveloping etc).
I see what you're saying - this is undoubtedly the best 'simple' solution. Although it isn't really that simple, is it! :)

CPU efficiency with this method must depend on what type of filter you use for the resampling, and the size of your delay line. If you have a huge buffer for your delay, then it would start to get very inefficient.

In this instance, you can't just use the 'normal' oversampling technique where you zero-pad your samples and then decimate by the same factor - you have to use some sort of 'proper' interpolation, because the interpolation factor and the decimation factor will not be the same in most instances.

For perfect interpolation you would have to use a sinc FIR that is infinite in length. Obviously this would be total overkill, but you could use another method like windowed sinc or Hermite. Even then, this is still pretty expensive. Aliasing would only really become a problem when there has been a change in the delay time. You would still get inaccuracies even so, though they might not be audible in a number of situations.

If you use a lot of taps in the filter in order to get really high quality interpolation, then this will have a huge impact on performance.
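To make the trade-off concrete, a windowed-sinc fractional read might be sketched like this (a small Hann-windowed sinc with coefficient normalisation; the tap count and names are illustrative, not from any particular plugin):

```cpp
#include <cmath>
#include <vector>

// Interpolate buf at fractional position pos using a Hann-windowed sinc.
// 'taps' is the quality/cost knob: more taps = better stopband, more CPU.
double sincInterpolate(const std::vector<double>& buf, double pos, int taps = 8)
{
    const double PI = 3.14159265358979323846;
    int i0 = (int)std::floor(pos);
    double frac = pos - i0;
    double sum = 0.0, norm = 0.0;
    for (int k = -taps / 2 + 1; k <= taps / 2; ++k) {
        double x = k - frac;  // distance of this tap from the read point
        double sinc = (x == 0.0) ? 1.0 : std::sin(PI * x) / (PI * x);
        double win  = 0.5 + 0.5 * std::cos(PI * x / (taps / 2)); // Hann window
        double c = sinc * win;
        int idx = i0 + k;
        if (idx >= 0 && idx < (int)buf.size()) {
            sum  += buf[idx] * c;
            norm += c;
        }
    }
    return norm != 0.0 ? sum / norm : 0.0;  // normalise for unity DC gain
}
```

At integer positions the sinc zeros fall on every tap except the centre one, so the original samples pass through unchanged; only the in-between positions are approximated.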

If there were a way of self-optimisation, where you limit the undesired effects of interpolation when you don't actually need to interpolate, then that must be a good thing :).

Post

texture wrote:If there was a way of self optimisation where you limit the undesired effects of interpolation when you don't actually need to interpolate, then this must be a good thing :).
Well, that's a Holy Grail isn't it? We'd all love an antialiasing filter that continually readjusts its strength and accuracy to be only as CPU-intensive as required. :)

Forever,

Kim.

Post

well,

in AnalogDelay I use a fractional delay line with linear interpolation between 2 samples. And I use either linear or IIR interpolation for the (float) read position.
That sounds good enough to my ears
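As I understand that description, it could be sketched roughly like this (hypothetical names, and I'm assuming the IIR option means a one-pole smoother on the delay time - a sketch, not remy's actual code):

```cpp
#include <cstddef>
#include <vector>

// Fixed-size buffer with a float read position. The delay time is smoothed
// with a one-pole IIR so the read pointer glides instead of jumping.
struct FractionalDelay {
    std::vector<float> buf;
    std::size_t writePos = 0;
    float currentDelay = 0.0f;  // smoothed delay, in samples
    float targetDelay  = 0.0f;  // delay the knob is asking for
    float smooth = 0.999f;      // closer to 1 = slower, smoother glide

    explicit FractionalDelay(std::size_t size) : buf(size, 0.0f) {}

    void setDelay(float samples) { targetDelay = samples; }

    float process(float in) {
        buf[writePos] = in;
        // one-pole IIR smoothing of the delay time
        currentDelay = smooth * currentDelay + (1.0f - smooth) * targetDelay;
        // fractional read position behind the write pointer
        float pos = (float)writePos - currentDelay;
        while (pos < 0.0f) pos += (float)buf.size();
        std::size_t i0 = (std::size_t)pos;
        std::size_t i1 = (i0 + 1) % buf.size();
        float frac = pos - (float)i0;
        float out = (1.0f - frac) * buf[i0] + frac * buf[i1];
        writePos = (writePos + 1) % buf.size();
        return out;
    }
};
```

Because the read position only ever moves in small fractional steps, a knob change becomes a brief pitch glide (tape-style) rather than a zip.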

cheers

--remy

Post

remy - thanks for the hints.

could you flesh that out a bit, please? you have a working solution and that's the most important thing :)

(1) What exactly do you mean by fractional delay line?

(2) So when you read from the buffer, you do a simple linear interpolation between the current sample and the next sample, right? Doesn't that reduce the frequency response - i.e. isn't that a form of LPF?

(3) So your buffer is a fixed length and you read it at varying speeds using an interpolation algorithm to adjust the buffer read pointer? Am I understanding this right?

:help: .

and thanks so much to Jeez and Texture too for a fascinating thread. :)

Glad I started it now :hihi:

Post

looks like he's got a fixed-length buffer and reads it using a floating-point index. To do so, you use the float to determine the two samples 'either side' of your fractional index, then interpolate between them using the fractional position between the two samples.
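In code, that fractional read might look like this (illustrative only):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Read a buffer at a floating-point index: blend the two samples either
// side of the index, weighted by its fractional part.
float readFractional(const std::vector<float>& buf, double index)
{
    std::size_t i0 = (std::size_t)index % buf.size();
    std::size_t i1 = (i0 + 1) % buf.size();   // wrap at the end of the buffer
    float frac = (float)(index - std::floor(index));
    return (1.0f - frac) * buf[i0] + frac * buf[i1];
}
```

So reading at index 1.5 in a buffer {0, 1, 2, 3} gives halfway between samples 1 and 2.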
