General method for smooth parameter changes

DSP, Plugin and Host development discussion.

Post

I'm using a one-pole filter with a fixed cutoff to transition from the current parameter value to the desired value over a few milliseconds. I think that will probably sound a good bit smoother than the average-the-difference approach you have sketched out. I stop running the one-pole filter once the two values have converged "close enough" (I'm using 0.001 as the "close enough" threshold).

I don't think there is any way around recalculating the coefficients while the parameter is still changing.
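
In code, a minimal sketch of that scheme (the 0.001 "close enough" threshold is from above; the struct and names are illustrative):

Code: Select all

#include <cmath>

// One-pole smoother with a fixed "cutoff" and a convergence check.
// tick() returns true while still moving, i.e. while the filter
// coefficients still need to be recalculated.
struct OnePoleSmoother
{
    double current = 0.0;
    double coeff = 0.01; // fixed rate, tuned for a few ms at the host sample rate

    bool tick(double target)
    {
        if (std::abs(target - current) < 0.001) // "close enough"
        {
            current = target; // snap and stop
            return false;
        }
        current += (target - current) * coeff;
        return true;
    }
};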

Post

ajl1606 wrote: Thu Mar 16, 2023 4:08 pm

Code: Select all

void processBlock() {
    double desiredFrequency = getParameter();
    for (int i = 0; i < samplesInBlock; i++) {
        actualFrequency += (desiredFrequency - actualFrequency) * .5;
        calculateFilterCoefficients(actualFrequency);
        processFilter(sampleBuffer[i]);
    }
}
First, I want to note that hard-coding the smoothing filter coefficient (.5) like this makes your smoothing dependent on sampling rate, which is less than ideal... but let's look at the other stuff:
One disadvantage of this method (though an unavoidable one) is that the filter coefficients have to be recalculated continually as 'actualFrequency' moves towards 'desiredFrequency'.
Generally speaking, whatever type of smoothing you do, you can't really avoid recalculating coefficients. If you don't want to do that every sample, the normal approach is to compute the modulation at some lower rate (e.g. once every 16 samples) and then interpolate either the actual coefficients or some intermediate form; the latter avoids the most expensive computations when the filter in question doesn't behave well under direct interpolation of its coefficients (though this is mostly less of an issue for filters that behave well under modulation to begin with).
The other disadvantage is that 'actualFrequency' never quite reaches 'desiredFrequency', and so an additional check (not covered in the pseudo code above) would need to be carried out to see whether 'actualFrequency' is close enough to 'desiredFrequency' that we no longer need to recalculate the filter coefficients.
A related issue is that a one-pole smoother is very lop-sided when there are steps in the input signal: it starts moving towards the target fast, then slows down as it approaches. For these reasons some variation of linear interpolation (with a straight slope and a finite interpolation time) usually works better for things like GUI parameters that need a lot of smoothing.
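
To illustrate, a sketch of such a linear-ramp smoother (names are illustrative): constant slope, finite ramp time, no exponential tail:

Code: Select all

// Linear parameter smoother: ramps to the target with a straight slope
// over a fixed number of samples, then stops exactly on the target.
struct LinearSmoother
{
    double current = 0.0;
    double step = 0.0;
    int remaining = 0;

    void setTarget(double target, int rampSamples)
    {
        remaining = rampSamples > 0 ? rampSamples : 1;
        step = (target - current) / remaining;
    }

    double tick()
    {
        if (remaining > 0)
        {
            current += step;
            --remaining;
        }
        return current;
    }
};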

Post

First, I want to note that hard-coding the smoothing filter coefficient (.5) like this makes your smoothing dependent on sampling rate, which is less than ideal
That's a very good point! Perhaps I could calculate the coefficient at runtime using, for example:

Code: Select all

smoothingFactor = 1.0 / (10.0 / 1000.0 * sampleRate);
What you say about the one-pole smoother being lop-sided when there are steps in the input signal is also interesting. Currently I'm using it for setting the frequency cut-off (a double-precision value) of a biquad filter, so I don't imagine it's a problem, but I'll bear that in mind for other applications.

Thanks mystran and sephirah-mike for your help. It's good to know that this is more-or-less a normal approach for parameter smoothing.

Post

ajl1606 wrote: Fri Mar 17, 2023 11:48 am
First, I want to note that hard-coding the smoothing filter coefficient (.5) like this makes your smoothing dependent on sampling rate, which is less than ideal
That's a very good point! Perhaps I could calculate the coefficient at runtime using, for example:

Code: Select all

smoothingFactor = 1.0 / (10.0 / 1000.0 * sampleRate);
The exact formula to set a chosen time constant (~63% decay) is coefficient = 1 - exp(-1 / (time * samplerate)); your expression above is the first-order approximation of it, which is fine for coefficients this small.
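
As a sketch, for a smoother of the form y += (x - y) * coefficient (the function name is illustrative):

Code: Select all

#include <cmath>

// One-pole coefficient from a time constant: the pole sits at
// (1 - coefficient) = exp(-1 / (time * sampleRate)), so after 'time'
// seconds the smoother has covered ~63% of the distance to the target.
double onePoleCoefficient(double timeSeconds, double sampleRate)
{
    return 1.0 - std::exp(-1.0 / (timeSeconds * sampleRate));
}
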
What you say about the one-pole smoother being lop-sided when there are steps in the input signal is also interesting. Currently I'm using it for setting the frequency cut-off (a double-precision value) of a biquad filter, so I don't imagine it's a problem, but I'll bear that in mind for other applications.
Basically what I'm trying to say is that if you have something like automation coming in at a reduced rate, a one-pole will create a sort of bouncing pattern: each new automation event makes it chase towards the new value faster, but it always slows down again before the next one arrives. So it's not very effective at reducing "zipper noise" (e.g. linear interpolation is arguably better) unless you smooth a lot, but often we'd prefer not to smooth more than necessary.

That said, if "biquad" means a direct form filter, those don't modulate particularly well, so you'll end up with further artifacts. It's not a precision issue; it's an issue with how a direct form filter stores its state. For applications where you want smooth modulation, probably the best alternative is the trapezoidal SVF: it's roughly the same number of operations (actually it might even be cheaper if you recompute coefficients every sample for some standard response), butter-smooth under modulation, and fully general, so whatever response you can get out of a direct form biquad, the SVF can do it... and it doesn't mind its coefficients being interpolated, something you really don't want to do with a direct form biquad.
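
For reference, a minimal sketch of such a trapezoidal SVF (low-pass output; this follows the commonly used formulation, and the names are illustrative rather than from any particular codebase):

Code: Select all

#include <cmath>

// Trapezoidal (TPT) state variable filter, low-pass output.
// g is the prewarped cutoff gain, k = 1/Q is the damping.
struct TrapezoidalSVF
{
    double g = 0.0, k = 1.0;
    double ic1eq = 0.0, ic2eq = 0.0; // integrator states

    void setCutoff(double fc, double fs, double Q)
    {
        const double pi = 3.14159265358979323846;
        g = std::tan(pi * fc / fs); // bilinear-transform prewarping
        k = 1.0 / Q;
    }

    double processLowpass(double v0)
    {
        const double v1 = (ic1eq + g * (v0 - ic2eq)) / (1.0 + g * (g + k)); // band-pass
        const double v2 = ic2eq + g * v1;                                   // low-pass
        ic1eq = 2.0 * v1 - ic1eq; // trapezoidal integrator state updates
        ic2eq = 2.0 * v2 - ic2eq;
        return v2; // high-pass would be v0 - k*v1 - v2
    }
};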

Post

Ah, I see what you mean now - if you were to draw the 'actualFrequency' on a graph, you'd see a series of ramps as it approaches and slows, approaches and slows.

About the biquad (direct form, second-order) - I originally used a state variable filter, but I had trouble mapping the frequencies accurately, especially near Nyquist, and I didn't want to oversample (perhaps that's a separate forum post!). It could just be that I hadn't implemented the SVF correctly, but in my experience I've always had more control with biquad filters.

Post

ajl1606 wrote: Fri Mar 17, 2023 6:16 pm About the biquad (direct form, second-order) - I originally used a state variable filter, but I had trouble mapping the frequencies accurately, especially near Nyquist, and I didn't want to oversample (perhaps that's a separate forum post!). It could just be that I hadn't implemented the SVF correctly, but in my experience I've always had more control with biquad filters.
Did you use a trapezoidal SVF or an old-style Chamberlin? The former can reasonably replace every biquad you ever use anywhere (you can even take direct form coefficients and solve for the SVF coefficients, though for most "standard" responses it's easier and faster to just design directly), while the latter is problematic in several ways.

The Art of VA Filter Design is probably the best resource for trapezoidal filters, but the basic idea is that (implicit) trapezoidal integration is essentially equivalent to the bilinear transform, so any analog design (and response) carries over as you'd expect with the BLT, and tuning is a matter of prewarping with tan(pi*f/fs).

Post

ajl1606 wrote: Thu Mar 16, 2023 4:08 pm I'm currently using a simple one-pole filter to interpolate the parameters before setting the filter coefficients.
The other disadvantage is that 'actualFrequency' never quite reaches 'desiredFrequency', and so an additional check (not covered in the pseudo code above) would need to be carried out to see whether 'actualFrequency' is close enough to 'desiredFrequency' that we no longer need to recalculate the filter coefficients.

Your input as to whether this is a standard approach, or whether there's a better approach, would be very welcome.
Interpolating parameter events is a form of upsampling. Therefore the 'technically correct' way of smoothing them is a low-pass filter. To be 'more correct', the filter cutoff should be related to the rate at which the parameter updates happen: you don't want so much filtering that the parameter takes ages to change, nor so little that the parameter changes sound like a series of discrete 'steps' ('zipper noise').
That said, I find it complicated to decide when a filter is near enough to 'settled' because it takes infinite time to settle completely.
So instead I use linear interpolation on the parameter value, where the rate of change adapts to the rate at which the parameter change events are arriving: for infrequent updates I use more smoothing, for rapid updates less. A linear ramp always takes a finite, predictable time to arrive at its destination, so it's easy to detect the end of the ramp and switch back to a more efficient (non-modulated) filter algorithm.
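
A sketch of that adaptive ramp (names are illustrative):

Code: Select all

#include <algorithm>

// Adaptive linear ramp: each parameter event starts a ramp whose length
// matches the observed gap between events, so infrequent updates get more
// smoothing and rapid updates get less. settled() is cheap to test, which
// makes it easy to fall back to a non-modulated filter path.
struct AdaptiveRamp
{
    double current = 0.0, target = 0.0, step = 0.0;
    int remaining = 0, sinceLastEvent = 0;

    void onParameterEvent(double newTarget)
    {
        target = newTarget;
        remaining = std::max(1, sinceLastEvent); // ramp over the last observed gap
        step = (target - current) / remaining;
        sinceLastEvent = 0;
    }

    bool settled() const { return remaining == 0; }

    double tick()
    {
        ++sinceLastEvent;
        if (remaining > 0)
        {
            current += step;
            if (--remaining == 0)
                current = target; // lands exactly, no asymptote to test
        }
        return current;
    }
};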

Post

I've been using lossy integration on parameters since ~2001 in Xhip. It doesn't work for blending between "frames", as when you use a reduced modulation rate (sample rate / 16); that requires linear interpolation applied on top of the smoothing. Most of the artifacts have little impact at very low frequencies.

I can open-source this under modified BSD or something.

Code: Select all

	template <typename T>
	struct pf_coefficient_t
	{
		pf_coefficient_t() : time(T(0.0)), coefficient(T(0.0)), samples(0) {}
		void set_rate(const float sr, const float _time) { set_rate_spec(sr, _time, T(0)); }

	private:

		void set_rate_spec(const float sr, const float _time, const float &type)
		{
			time = _time;
			const float speed = _time > 0.0f ? intgr::spd(0.01f, _time * sr) : 1.0f;
			coefficient = speed;
			// should be: time * sr == intgr::smp(speed, 0.01)
			samples = 1 + i32(std::ceil(_time * sr));
		}

		void set_rate_spec(const float sr, const float _time, const i32 &type)
		{
			time = _time;
			const float speed = _time > 0.0f ? intgr::spd(0.01f, _time * sr) : 1.0f;
			// 2x because m320 gives 1/2 (7fxx x 7fxx = 3fxx)
			coefficient = 2 * i32(std::floor(0x7FFFFFFF * speed));
			if (speed > 0.5f)
				coefficient = i32(std::floor(0x7FFFFFFF * speed));
			// should be: time * sr == intgr::smp(speed, 0.01)
			samples = 1 + i32(std::ceil(_time * sr));
		}

	public:

		float time;
		T coefficient;
		i32 samples;
	};

	template <typename T>
	struct pf_element_t
	{
		pf_element_t() : destination(nullptr), target(T(0.0)), time(0), coeff_index(0) {}

		void operator=(const pf_element_t &p)
		{
			destination = p.destination;
			target = p.target;
			time = p.time;
			coeff_index = p.coeff_index;
		}

		void configure(T *_destination, T _target, const pf_coefficient_t<T> &coeff, const int _coeff_index)
		{
			destination = _destination;
			target = _target;
			// fits in signed 24-bit integer
			ASSERT(coeff.samples >= 0 && coeff.samples < 8388608);
			time = coeff.samples;
			// fits in signed 8-bit integer
			ASSERT(_coeff_index >= 0 && _coeff_index < 128);
			coeff_index = _coeff_index;
		}

		void operator()(const pf_coefficient_t<T> &coeff) { tick(coeff); }

	private:

		void tick(const pf_coefficient_t<float> &coeff)
		{
			*destination += (target - *destination) * coeff.coefficient;
			--time;
		}

		void tick(const pf_coefficient_t<i32> &coeff)
		{
			*destination += m320(target, coeff.coefficient) - m320(*destination, coeff.coefficient);
			--time;
		}

	public:

		T *destination;
		T target;
		i32 time : 24;
		i32 coeff_index : 8;
	};

Code: Select all

namespace std
{
	template <typename T> inline T nroot(const T n, const T r) { return T(std::pow(n, T(1.0) / r)); }	// r-th root: n^(1/r)
	template <typename T> inline T apow(const T n, const T r) { return T(std::log(n) / std::log(r)); }	// log base r of n
}

Code: Select all

namespace intgr
{
	// one-pole relationships: after smp samples at per-sample speed spd, the
	// remaining distance is pos = (1 - spd)^smp; spd() and smp() invert this
	template <typename T> inline T pos(T spd, T smp) { return T(std::pow(T(1.0) - spd, smp)); }
	template <typename T> inline T spd(T pos, T smp) { return T(T(1.0) - std::nroot(pos, smp)); }
	template <typename T> inline T smp(T spd, T pos) { return T(std::apow(pos, T(1.0) - spd)); }
}

Code: Select all

#if defined(_WIN32)
#include <intrin.h>
template <int shift>
inline __int32 m32s(const __int32 A, const __int32 B)
{
	return ((unsigned __int64)__emul(A, B) >> shift) & 0xFFFFFFFF;
}
#else
template <int shift>
inline i32 m32s(const i32 A, const i32 B)
{
	return (u64(i64(A) * i64(B)) >> shift) & 0xFFFFFFFF;
}
#endif
// macro wrapping a template (should inline, template wrapping a template may not)
// >>31 gives a proper fp0.32 multiply but will require a 64-bit shift instruction
#define m32(A,B) m32s<31>(A, B)
// >>32 allows AMD64 32-bit registers to be used which should be faster
#define m320(A,B) m32s<32>(A, B)
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

ajl1606 wrote: Thu Mar 16, 2023 4:08 pm Hi there,
I'd like to ask a related question to any developers out there.
I'm currently using a simple one-pole filter to interpolate the parameters before setting the filter coefficients. In pseudo code:
...
One disadvantage of this method (though an unavoidable one) is that the filter coefficients have to be recalculated continually as 'actualFrequency' moves towards 'desiredFrequency'.

The other disadvantage is that 'actualFrequency' never quite reaches 'desiredFrequency', and so an additional check (not covered in the pseudo code above) would need to be carried out to see whether 'actualFrequency' is close enough to 'desiredFrequency' that we no longer need to recalculate the filter coefficients.
That's a simple issue of optimizing your implementation. You don't need to filter the "cutoff", you can filter the coefficient directly. It's only when you have other coefficients calculated based upon the filtered coefficient that you need to recompute them. That's not typically an issue though: you can compute sets of four matching coefficients, for example, and use SSE instructions to perform the filtering and interpolation. (For filter cutoff frequencies, pitch errors below 12-bit precision are almost impossible to discern, so just generate a table[4096][N] and linearly interpolate between the two neighboring indices.)

As for "not reaching" the asymptote, that just requires you to count the number of samples to run the filter for and force the destination value to the target at the end. That's extremely efficient, and it allows you to apply the filters with a stack and a loop (pop the stack when the target is reached; order doesn't matter, so move the end into the popped gap) rather than manually typing in all that redundant copy-pasted spaghetti code.
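
A sketch of both ideas (illustrative only, not Xhip's actual code; the 4096-entry table matches the 12-bit figure above):

Code: Select all

#include <cmath>

// Coefficient table indexed by normalized pitch, with linear interpolation
// between the two neighboring entries (~12-bit pitch resolution).
static const int kTableSize = 4096;
static double coeffTable[kTableSize + 1]; // +1 guard entry, filled at startup

double lookupCoefficient(double pitch01) // pitch01 in [0, 1]
{
    const double x = pitch01 * (kTableSize - 1);
    const int i = int(x);
    const double f = x - double(i);
    return coeffTable[i] + f * (coeffTable[i + 1] - coeffTable[i]);
}

// Counted one-pole: run the lossy integration for a fixed number of samples,
// then force the value to the target, so there's no asymptote to test.
struct CountedSmoother
{
    double value = 0.0;
    int samplesLeft = 0;

    void tick(double target, double coefficient)
    {
        if (samplesLeft <= 0)
            return; // settled; pop this element off the active stack
        value += (target - value) * coefficient;
        if (--samplesLeft == 0)
            value = target; // force exact arrival at the end of the count
    }
};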

Post

The Art of VA Filter Design is probably the best resource for trapezoidal filters
Thanks mystran, that looks like an excellent resource. If I'm able to replace my biquads, I would gladly do it :D
So instead I use linear interpolation on the parameter value, where the rate of change adapts to the rate at which the parameter change events are arriving
That seems like a sensible approach, thank you.
You don't need to filter the "cutoff", you can filter the coefficient directly
aciddose - are you suggesting that the parameter value is fed directly into the filter coefficient calculation, and that the coefficients themselves will interpolate (?) towards their desired values over time?

Post

Jeff McClintock wrote: Sat Mar 18, 2023 2:52 am Interpolating parameter events is a form of upsampling. Therefore the 'technically correct' way of smoothing them is a low-pass filter.
That's a somewhat arguable statement, because it implicitly compares control signals to audio. While such a comparison might be relevant for measuring the aliasing generated by control signals, it's not just about aliasing. Even for audio signals it's not only about aliasing: you clearly don't want to lowpass too much, otherwise everything sounds completely muffled. The cutoffs involved in control signal interpolation are usually already so low that aliasing is not an issue at all, and completely different aspects become relevant; in particular, the transient response curve becomes absolutely critical. OTOH a 1-pole lowpass would be very poor as a means of suppressing aliasing, plus we don't bother to generate an originally bandlimited control signal to begin with (so interpolating control signals is a bit like upsampling non-bandlimited audio, and it's well known that the latter is not really correct). So I don't think one can appeal to audio practice to claim what is technically correct and what isn't; very different factors are in focus here.
You don't need to filter the "cutoff", you can filter the coefficient directly
This is somewhat dangerous, because it generates inconsistent sets of coefficients during interpolation, and you're just taking chances on how bad these inconsistencies are. Theoretically the filter can even explode, especially at high resonance. Maybe in reality it doesn't, but you can't really be sure.

What one could do, however, is interpolate the prewarped cutoff. This only creates distortions in the cutoff curve itself, but no inconsistent coefficient sets. Notice that linear interpolation of the cutoff (pre-prewarp or post-prewarp) may introduce subtle artifacts into the sound, much as linear interpolation of a digital delay's time can, at least theoretically, create zipper steppiness in the perceived pitch (although the effect is usually much less noticeable).
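
A sketch of that suggestion (names are illustrative): prewarp once per control block, then ramp the prewarped value at audio rate:

Code: Select all

#include <cmath>

// Interpolating the prewarped cutoff: compute g = tan(pi*fc/fs) once per
// control block, then ramp g linearly at audio rate. Every intermediate g
// is still a self-consistent coefficient for a trapezoidal filter; only
// the cutoff trajectory is slightly warped.
struct PrewarpedCutoffRamp
{
    double g = 0.0, gStep = 0.0;

    void setBlockTarget(double fc, double fs, int blockSamples)
    {
        const double pi = 3.14159265358979323846;
        const double gTarget = std::tan(pi * fc / fs); // prewarp once per block
        gStep = (gTarget - g) / blockSamples;
    }

    double tick() { return g += gStep; } // per-sample interpolated g
};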

Post

I did note the linked (matrix) coefficients issue and suggested minimizing the error. In the worst case, where the error from linear interpolation between two neighboring table indices leads to destabilization (it generally won't, but yes, it can), simply don't interpolate. I'm not merely suggesting dysfunctional ideas; I'm telling you how my own code has worked successfully for decades.

The greatest issue is where you have non-linear interdependent coefficients. Those can never be stored accurately in a table together (the table would be too large, and the non-linearity may differ between the two) so one will always need some computation from the other. In some cases though the error can be shifted around to ensure stable, though inaccurate results. For example a feedback parameter doesn't need to be exactly precise so long as the relationship between frequency and feedback is accurate.

If, for example, the results are stable up to some limit, it's possible to use a very cheaply fitted approximation: so long as the approximation always gives a result under the limit, the only consequence is behavioral error. For example, the naive "state variable" implementation from Chamberlin's paper is not stable above a limit on the cutoff coefficient, or below a limit on feedback. So what I've used instead is the same filter applied to a ZOH input with a very rough 2nd-order (quadratic) feedback limit. This allows the cutoff coefficient to go beyond Nyquist while ensuring stability. The more expensive frequency coefficient can be sourced from a table, with the approximation applied with two multiplications.

Post

ajl1606 wrote: Sat Mar 18, 2023 11:30 pm aciddose - are you suggesting that the parameter value is fed directly into the filter coefficient calculation, and that the coefficients themselves will interpolate (?) towards their desired values over time?
This can also be done if you have a filter structure that tolerates it (hint: use the trapezoidal SVF), but as aciddose pointed out, you should approach this whole thing more like "what things need to be smoothed/interpolated where" rather than "where do I place my smoothing."

One possible engineering design might be to (1) smooth GUI/automation events using linear interpolation, (2) generate modulation at some slightly reduced rate like 1/16 samplerate, (3) interpolate the modulations back to audio rate (e.g. linear interpolation), perhaps smoothing with a one-pole just a little bit (to round the corners), (4) do the filter coefficient computations at this rate, and (5) interpolate the coefficients to the oversampled filter computation rate.

That's not the only possible design. We could move the coefficient computations (or perhaps only part of them!) to modulation rate, we might skip the interpolation to the oversampled rate (the audible difference is there, but it's minor), or we might choose to go for quality at all costs and do everything at oversampled rates. The point, though, is that if we think about these control values as signals in their own right, we almost always end up doing smoothing and/or interpolation in multiple places... and that's fine.
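
A compressed sketch of the middle of such a pipeline, under illustrative assumptions (the 16-sample block, the names, and tickFilter as a stand-in for whatever filter you actually run are all hypothetical):

Code: Select all

#include <algorithm>
#include <cmath>

// Stand-in for the real filter tick; here a trivial one-pole low-pass.
double tickFilter(double x, double g)
{
    static double state = 0.0;
    state += (g / (1.0 + g)) * (x - state);
    return state;
}

// Coefficient computation once per 16-sample control block, with the
// coefficient itself linearly interpolated back up to audio rate.
void processBlock(double *buffer, int numSamples,
                  double &g,               // current audio-rate coefficient
                  double smoothedCutoffHz, // from the GUI/automation smoother
                  double fs)
{
    const double pi = 3.14159265358979323846;
    for (int start = 0; start < numSamples; start += 16)
    {
        const double gTarget = std::tan(pi * smoothedCutoffHz / fs);
        const int n = std::min(16, numSamples - start);
        const double gStep = (gTarget - g) / n;
        for (int i = 0; i < n; ++i)
        {
            g += gStep; // interpolate the coefficient, not the cutoff
            buffer[start + i] = tickFilter(buffer[start + i], g);
        }
    }
}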

Post

mystran wrote: Mon Mar 20, 2023 9:01 am One possible engineering design might be to (1) smooth GUI/automation events using linear interpolation, (2) generate modulation at some slightly reduced rate like 1/16 samplerate, (3) interpolate the modulations back to audio rate (e.g. linear interpolation), perhaps smoothing with a one-pole just a little bit (to round the corners), (4) do the filter coefficient computations at this rate, and (5) interpolate the coefficients to the oversampled filter computation rate.
The question is: recalculating an SVF cutoff is basically a simple table lookup, so is it possible that all these steps of computation are less efficient than just computing the cutoff at audio rate?

(To be fair, you would still need to do step 1.)

