General method for smooth parameter changes


Post

A while ago I was given suggestions on doing smooth parameter changes. Mostly they involved some kind of filtering or interpolation of the parameter value. For example, if a low-pass filter cutoff was changing from 20 Hz to 400 Hz, one could interpolate the cutoff over time, starting at 20 Hz and finishing at 400 Hz.

I never really liked this idea, mainly because it requires custom attention and detail for every filter, and in some cases it is not clear how one interpolates between the starting and ending values. For example, how do you interpolate a high-frequency "cut" switch? It's either on or off.

I propose a new algorithm for smoothing parameter changes. This technique can be applied to any effect, independent of its mode of operation or parameters.

Definition:

Given an effect E in its current state (denoted E0) with a given set of parameters, change to a new state with a new set of parameters (E1) smoothly over some fixed number of samples (usually the number of samples processed per output block, or the latency).

Method:

1) Produce the full output for E0, into a temporary buffer

2) Produce the full output for E1, into a temporary buffer

3) Calculate the final output as the linear interpolation between the E0 and E1 outputs, where the interpolating parameter t varies from 0 (all E0) to 1 (all E1)
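A minimal C++ sketch of the three steps, assuming a hypothetical Effect class with a process(in, out, n) member (the names are placeholders, not an actual API):

Code:

#include <vector>

// "Effect" is a hypothetical stand-in for the real DSP class.
struct Effect { void process(const float* in, float* out, int n); };

void smoothBlock(Effect& e0, Effect& e1,
                 const float* in, float* out, int numSamples)
{
    std::vector<float> a(numSamples), b(numSamples);

    e0.process(in, a.data(), numSamples); // step 1: old parameter set E0
    e1.process(in, b.data(), numSamples); // step 2: new parameter set E1

    for (int i = 0; i < numSamples; ++i)
    {
        // step 3: t ramps from near 0 (all E0) to 1 (all E1)
        float t = (float)(i + 1) / (float)numSamples;
        out[i] = (1.0f - t) * a[i] + t * b[i];
    }
}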

Analysis:

I don't know enough to comment on what audio artifacts will be introduced using this method.

This will introduce additional latency equal to the size of the output block, since the full signal from E1 is not present until the very end of the buffer.

There is a performance penalty, since the filter is calculated twice in one processing cycle (once for E0 and once for E1). This may require additional memory. This penalty might be offset by only needing to recompute the filter coefficients twice (instead of many times, as is the case when trying to vary the filter parameters smoothly).

Thoughts?

Post

thevinn wrote:
Thoughts?
This would essentially be like crossfading between the audio of the effect in its different states? I suppose it could work for some things. Simple amplitude crossfading isn't very effective audio "morphing" though. That's why I think people prefer to mangle the actual internal state of the DSP processing while smoothing out the parameter changes. Granted, it's of course much more work to code in the DSP, but the results will be appreciated by demanding end users! :hihi:

Post

Hmmm interesting idea. I've never thought of that.
thevinn wrote: There is a performance penalty, since the filter is calculated twice in one processing cycle (once for E0 and once for E1).
This I see as a real dealbreaker for more complex stuff such as a complete synth. The CPU would have spurious peaks of double the load.
Could work for something simpler such as an EQ.
thevinn wrote:This penalty might be offset by only needing to recompute the filter coefficients twice (instead of many times, as is the case when trying to vary the filter parameters smoothly).
This I don't quite get. For the effect to be inaudible it would have to happen fairly often, wouldn't it? You don't have to update filter coefficients per sample, for example. Say you need to do the fade in 64 samples, just as an example. Do you really need to update the coefficients more frequently than that? And even if you do, that would still apply to your idea, wouldn't it?

Post

Xenakios wrote:This would essentially be like crossfading between the audio of the effect in its different states?
Yes, that is exactly what I am saying.
I suppose it could work for some things. Simple amplitude crossfading isn't very effective audio "morphing" though.
Can you comment on how the result might sound for some of the interesting cases?
jupiter8 wrote:The CPU would have spurious peaks of double the load.
Could work for something simpler such as an EQ.
Yeah, there would be peaks. Or as an alternative, both filters can just be computed all the time, even if the parameters didn't change. This is likely the preferred method, since IIR filters have a state memory that should be kept updated even if the filter output is thrown away. For example, consider a resonant filter that is temporarily turned off - I would think you still want to keep the filter running internally so that the state stays updated for the case where the user clicks the filter back on.
This I don't quite get.
What I am saying is that computing the filter for two sets of parameters (my technique) might be significantly more efficient than computing it, say, ten times (for smoothly interpolating the parameters, as in the normally accepted case).
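A sketch of that always-running arrangement (Effect is a hypothetical stand-in for the real DSP class):

Code:

// Run both instances every block so their IIR state stays warm; the
// inactive instance's output is simply discarded.
struct Effect { void process(const float* in, float* out, int n); };

void processBoth(Effect& active, Effect& inactive,
                 const float* in, float* out,
                 float* scratch, int numSamples)
{
    active.process(in, out, numSamples);       // this output is heard
    inactive.process(in, scratch, numSamples); // discarded, but the
                                               // internal state advances
}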

Something else to consider is that this method should not suffer from the problem of IIR filters "blowing up" when parameters are changed, since any zippering or other artifacts will likely happen during the crossfade, while we are still outputting mostly the previous parameters' results.

Post

Okay I'm seeing some problems with this approach.

I just thought of the case of a simple phase inverter...interpolating between the original (normal phase) and filtered (inverted phase) will cause the overall amplitude to go to zero, then swell back up. Probably not what we want.
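To spell out the algebra: crossfading a signal x with its inverse gives y = (1 - t)*x + t*(-x) = (1 - 2t)*x, which passes through zero exactly at t = 0.5.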

I would imagine that for a phase inverter, to interpolate smoothly we would want the phase to shift gradually from 0 to 180 degrees over time.

Post

you can interpolate coefficients rather than parameters.

it's easiest and most reliable to lowpass filter on the per-block level in order to reduce computation. internally you can linearly interpolate, although you're correct that this does add one block of lag that is more a consequence of block processing than any specific filtering method. any type of filter will add lag unless you know ahead of time what the values will be.
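A sketch of that per-block arrangement (the struct name and the 0.2 smoothing coefficient are arbitrary choices):

Code:

// One-pole lowpass evaluated once per block, with a linear per-sample
// ramp toward the block-rate value in between.
struct SmoothedParam
{
    float current = 0.0f; // smoothed value, advanced once per sample
    float target  = 0.0f; // raw value from GUI/automation
    float step    = 0.0f; // per-sample increment within the current block

    void startBlock(int blockSize) // call once per block
    {
        float blockTarget = current + 0.2f * (target - current);
        step = (blockTarget - current) / (float)blockSize;
    }

    float next() { current += step; return current; } // once per sample
};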

personally, i run a lowpass per-sample for any parameter which requires smoothing before transforming into coefficients, and it doesn't add a significant amount of cpu time. the filters are allocated dynamically. if you were changing a ton of parameters at once it might matter, but the important thing to note is the filter only applies to inputs from the gui or automation - internal values are not filtered since they're already generated per-sample.

any system which uses block processing and filters coefficients rather than parameters should result in near-zero overhead for the filtering process.

Post

thevinn wrote: change to a new state with a new set of parameters (E1) smoothly over some fixed number of samples (usually the number of samples processed per output block, or the latency).
You should interpolate at the same rate at which the automation events arrive. If that happens to be 1 event per block, then interpolating over 1 block is correct.
On the other hand, if automation happens on average twice per block, interpolate over half the time.
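In code, the rule could be as simple as this sketch (how eventsPerBlock is estimated, e.g. with a running average, is left open):

Code:

#include <algorithm>

// One event per block gives a whole-block ramp; two events per block
// on average give a half-block ramp, and so on.
int smoothingLength(int blockSize, float eventsPerBlock)
{
    return (int)(blockSize / std::max(1.0f, eventsPerBlock));
}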

Post

hmm, I can see where you're coming from Jeff, but consider the case of very small blocks.

Say a master vol parameter, for whatever reason, shifts from 1 to 0 (or 0 to 1), and the block size is 1 sample. In FL this is a very real possibility. The next block, and indeed several subsequent ones, are different lengths, and none touch that parameter again.

In this case there would be a sudden drop (or rise) and more than likely an audible click. It would be preferable to have something that operates outside the confines of a block, and smooths the change out over a fixed sample period.

Another case: a filter cutoff is sent two param changes, from 0.75->0.5 then to 0.25. The first block is 512 samples long, then the second block is 2048 samples. The distance travelled by each param value change is the same, but the length of time to do it is dramatically different. If the changes were interpolated across the block, it could lead to a click in the first block but not in the second. If both were smoothed over a set period that is known to be safe, this can be avoided.

Overall, I don't disagree with what you suggest; but there are edge cases which could cause unusual results in the increasing number of hosts that allow non-fixed block lengths :)
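A sketch of such a fixed-period smoother, which ignores block boundaries entirely (names and the ramp length are illustrative):

Code:

// A ramp with a fixed length in samples, independent of how the host
// happens to slice its blocks.
struct FixedRamp
{
    float value = 0.0f, step = 0.0f;
    int   remaining = 0;

    void setTarget(float target, int lengthSamples) // e.g. ~5 ms worth
    {
        remaining = lengthSamples > 0 ? lengthSamples : 1;
        step = (target - value) / (float)remaining;
    }

    float next() // once per sample, across any number of host blocks
    {
        if (remaining > 0) { value += step; --remaining; }
        return value;
    }
};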

Post

duncanparsons wrote:hmm, I can see where you're coming from Jeff, but consider the case of very small blocks. [...]
+1. I have fallen foul of this before with Fruity Loops. Since the block size changes so much, the rate of the interpolation keeps varying.

My plugins now use an internally fixed blocksize and then interpolate parameter changes across this. That way the speed of the parameter changes remains constant no matter how many frames the host passes.
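A sketch of that internal sub-block scheme (kInternalBlock, advanceParameterRamps and processChunk are all hypothetical names):

Code:

#include <algorithm>

const int kInternalBlock = 32; // arbitrary fixed internal size

void advanceParameterRamps(int numSamples);    // constant-rate smoothing
void processChunk(float* buf, int numSamples); // the actual DSP

// The host may pass any number of frames; parameters are always stepped
// at kInternalBlock granularity, so the smoothing speed stays constant.
void processHostBlock(float* buf, int numFrames)
{
    int pos = 0;
    while (pos < numFrames)
    {
        int n = std::min(kInternalBlock, numFrames - pos);
        advanceParameterRamps(n);
        processChunk(buf + pos, n);
        pos += n;
    }
}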

Post

What's this block size? I thought the block size was the number of samples processed in each output buffer, which is directly related to the underlying hardware's buffer size/latency.

Anyway, with respect to automation events, I assume that means parameter changes? I apply parameter changes before each output block is computed, so no parameter changes occur in the middle of blocks - if my terminology is accurate.

Post

thevinn wrote:What's this block size? I thought the block size was the number of samples processed in each output buffer, which is directly related to the underlying hardware's buffer size/latency.
The hardware buffer size really doesn't need to be involved at all. Plugin hosts and other audio apps are of course free to implement their DSP processing at any buffer sizes they wish (also varying!), as long as they in the end output well-formed buffers into the audio hardware callback. It just happens that many VST plugin hosts have a 1:1 mapping between the audio hardware and DSP buffer sizes, but 3rd party plugin/DSP coders should by no means expect that to be universally the case.

Post

thevinn wrote:I just thought of the case of a simple phase inverter...interpolating between the original (normal phase) and filtered (inverted phase) will cause the overall amplitude to go to zero, then swell back up. [...]
yes, unfortunately there will be some phase issues. also, switching e.g. a lp/hp biquad iir on and off would mean maximum phase cancellation at Fc when a linear fade is at 50% between the two states. but since the fade will be very fast, it shouldn't be that noticeable unless you have complete phase inversion as you describe above.

on the filters "blowing up" issue: as i understand it, a filter that "blows up" (is overloaded) will need some time before its state settles back to normal, and only then should the state snapshot be taken so that the fade may proceed. but the problem here might be that this decay time (towards normalization) will be non-constant and dependent on the overload state itself.
linux/win32 framework for plugins and executables:
holos.googlecode.com

Post

For switching between filter types you could interpolate the zeroes of the transfer function, but in general there almost certainly is no "one size fits all" solution to this problem. You're much more likely to get good results by considering each case separately, and while it'd be nice to never have any clicks, some settings are just discrete in nature, and wasting a lot of (development or CPU) time on smoothing them might not be very cost-effective.

Not all parameters even benefit much from smoothing. Discontinuities in the output itself obviously produce clicks, but once the discontinuity is only in the first or second derivative, it will often be largely inaudible. Sometimes you might even prefer a faint click to the pitch shifting or other artifacts you get from interpolating delay times and the like.

But really, it all depends on the particular parameters and the purpose of a particular plugin.

Post

Hi guys,

Marc Gallo from Studio Devil here.

I stumbled across this thread, and had to chime in.

Cross-fading between two instances of the same effect, with different settings, is by far the best way to eliminate clicks, pops, etc. from most effect changes... that's how I've been doing things since at least 2005.

It is costly, in terms of CPU, because you're running the effect twice during the morph, and there are far cheaper methods if your application can tolerate them.

For example, most simple filters work acceptably with just morphing the coefficients... alternatively, you can morph the center frequency, Q, and gain smoothly, and compute the coefficients each morphing frame... both of these work well for simple filters.
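A sketch of that second approach, stepping frequency and Q toward their targets each control frame and rebuilding the coefficients from the stepped values (gain is omitted for brevity; the formulas are the familiar RBJ cookbook lowpass, and the 0.1 step per frame is an arbitrary rate):

Code:

#include <cmath>

struct BiquadCoeffs { double b0, b1, b2, a1, a2; };

// RBJ cookbook lowpass coefficients, normalized by a0.
BiquadCoeffs lowpass(double fs, double fc, double q)
{
    double w0 = 2.0 * M_PI * fc / fs;
    double alpha = std::sin(w0) / (2.0 * q);
    double c = std::cos(w0);
    double a0 = 1.0 + alpha;
    return { (1.0 - c) * 0.5 / a0, (1.0 - c) / a0, (1.0 - c) * 0.5 / a0,
             -2.0 * c / a0, (1.0 - alpha) / a0 };
}

struct MorphingFilter
{
    double fc = 1000.0, q = 0.707; // current (smoothed) parameters
    double fcTarget = 1000.0, qTarget = 0.707;
    BiquadCoeffs coeffs {};

    void controlFrame(double fs) // call once per morphing frame
    {
        fc += (fcTarget - fc) * 0.1;
        q  += (qTarget  - q ) * 0.1;
        coeffs = lowpass(fs, fc, q); // recompute from stepped parameters
    }
};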

For more complex effects, and higher-order filters, even this trick can fail, leading to unstable intermediate coefficient sets. And just forget it for tube modeling or other complex things... that's why I went with crossfading two effects a long time ago, and found it to work really well... when you have ample CPU to throw at the problem.

For what it's worth, I've been using this technique for a long time now, and had great results with it.
Hope this helps.
I really enjoyed reading some of the posts on here.


Marc Gallo, CEO / founder
Studio Devil
www.studiodevil.com

Post

Hi there,
I'd like to ask a related question to any developers out there.
I'm currently using a simple one-pole filter to interpolate the parameters before setting the filter coefficients. In pseudocode:

Code:

void processBlock() {
    double desiredFrequency = getParameter(); // raw value from the host/GUI
    for (int i = 0; i < samplesInBlock; i++) {
        // one-pole lowpass toward the target (0.5 is the smoothing coefficient)
        actualFrequency += (desiredFrequency - actualFrequency) * 0.5;
        calculateFilterCoefficients(actualFrequency); // rebuilt every sample
        processFilter(sampleBuffer[i]); // filter one sample in place
    }
}
One disadvantage of this method (although the recalculation is necessary) is that the filter coefficients have to be recalculated continually as 'actualFrequency' moves towards 'desiredFrequency'.

The other disadvantage is that 'actualFrequency' never quite reaches 'desiredFrequency', and so an additional check (not covered in the pseudo code above) would need to be carried out to see whether 'actualFrequency' is close enough to 'desiredFrequency' that we no longer need to recalculate the filter coefficients.
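For what it's worth, one way to express that check is a small epsilon with a snap to the target; a sketch (the threshold value is arbitrary):

Code:

#include <cmath>

// Step toward the target, snap once the gap is inaudibly small, and
// report whether the coefficients still need recalculating.
bool stepTowards(double& actual, double desired)
{
    const double epsilon = 1.0e-3; // arbitrary threshold, here in Hz
    actual += (desired - actual) * 0.5;
    if (std::fabs(desired - actual) < epsilon)
        actual = desired; // close enough: smoothing is finished
    return actual != desired; // true while updates are still needed
}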

Your input as to whether this is a standard approach, or whether there's a better approach, would be very welcome.

Thanks :)
