Control Rate frequency for Filter Cutoff modulation

DSP, Plugin and Host development discussion.

Post

Hi guys,

I am currently reworking the filters in my framework and would like to get a best practice value for the filter coefficient update frequency.

It uses (mostly) state variable filters, and the modulation data exists at sample level. Obviously, calculating the coefficients for every sample is overkill (performance is critical here, because the filters are applied polyphonically on multiple channels). Still, I need the envelopes to be snappy so that e.g. short attack times are possible.

I had a rather complicated system that analyses the "movement" in the modulation data and calculates a dynamic segment size for dividing the audio block: if the modulation range is big, it returns fewer samples (down to 16), and if there is hardly any movement (e.g. in the sustain phase of an envelope), it just returns the whole buffer size. Then it chops the buffer into segments of the given size and calculates the coefficients for each segment.

However, this calculation (which needs to scan the modulation data for min and max values) might be less efficient than simply using a sensible default value (e.g. 32 samples), so I'd like to get a few opinions on possible solutions. Do you have any best-practice recommendations for handling this situation?
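
For reference, here is a minimal sketch of the fixed-interval variant being asked about, with a one-pole lowpass standing in for the SVF to keep it short (Python; all names are illustrative, not from the poster's framework):

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Coefficient for y[n] = y[n-1] + g * (x[n] - y[n-1]).
    # A one-pole lowpass stands in for the SVF in this sketch.
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def process_block(x, cutoff_mod, sample_rate, ctrl=32):
    # Recompute the coefficient once per `ctrl` samples and hold it
    # for the rest of the segment (the "sensible default" approach).
    y, state = [], 0.0
    for start in range(0, len(x), ctrl):
        g = one_pole_coeff(cutoff_mod[start], sample_rate)
        for n in range(start, min(start + ctrl, len(x))):
            state += g * (x[n] - state)
            y.append(state)
    return y
```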

Post

You want the best practice? Update the filter coefficients at the full sample rate. Plenty of synths do so; some, like Waves Element 2, even update the SV filter (including coefficients) at 2x the DAW sample rate. Anything less will alias more when modulated. If you want the gold standard, you can also low-pass filter the modulation signal a little (this also reduces aliasing).

Post

You certainly want to update the filter coefficients for every sample you process (which means at the oversampled rate if you oversample). The question then is whether to do full audio-rate modulation, or to process the modulation at some larger block size and interpolate the in-between values. Modulation rates like 1:16 used to be common, but depending on the complexity of your modulation scheme, full audio rate is not necessarily impossible either (though expect to spend up to 50% of your total CPU on modulation then). There are also a few choices to be made in terms of where you would interpolate (if you choose to interpolate), but whatever you do, you want the coefficients changing smoothly; anything else sounds like garbage.
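
A sketch of the interpolation idea: compute coefficients at control rate and ramp linearly between them per sample, so the coefficients change smoothly instead of jumping at segment boundaries (illustrative helper, not from any particular codebase):

```python
def interpolated_coeffs(g_prev, g_next, block_len):
    # Linearly ramp from the previous control-rate coefficient to the
    # next one across a block of `block_len` samples; the last value
    # lands exactly on g_next.
    step = (g_next - g_prev) / block_len
    return [g_prev + step * (i + 1) for i in range(block_len)]
```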

Post

full audio rate is not necessarily impossible either (though, expect to spend up to 50% of your total CPU on modulation then)
As a rough guide, a SynthEdit ZDF state variable filter with audio-rate modulation consumes 0.05% of my CPU, i.e. 1/2000th of one CPU core (Intel i7 @ 3.4 GHz). So that's not excessive, IMHO.
Also, straight audio-rate modulation can actually be simpler to code than trying to do modulation every 32 samples or so. You will have to spend some effort optimising away slow math operations, though.
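
On optimising away slow math: the usual per-sample cost in a ZDF SVF coefficient update is the tan() prewarp, which can be replaced by a cheap rational approximation. A sketch (function names are made up for illustration; the Padé form is only accurate for cutoffs well below Nyquist):

```python
import math

def tan_approx(x):
    # [3/2] Pade approximation of tan(x); accurate for |x| < ~1,
    # i.e. cutoff well below Nyquist. Avoids a libm call per sample.
    x2 = x * x
    return x * (15.0 - x2) / (15.0 - 6.0 * x2)

def svf_g(cutoff_hz, sample_rate):
    # Bilinear-transform prewarped integrator gain for a ZDF SVF.
    return tan_approx(math.pi * cutoff_hz / sample_rate)
```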

Post

You want the best-practice? Update filter coefs at the full sample rate. Plenty of Synths do so, some like Waves Element 2 even update the SV filter (including coefs) at 2x the DAW sample rate. Anything less will alias more when modulated.
Really? Why?
The sample rate of a signal derives from what frequencies you want to transport on it.
Modulation signals rarely go above 100 Hz for me, so what's the point of sampling and processing them at 48 kHz (or even 2 x 48 kHz?? oO)?
If you sample them at 1 kHz you have 5x oversampling on your 100 Hz modulation signals. If you do 10 kHz you have 50x oversampling... so it's easy to solve the aliasing by oversampling.
When you apply the modulation you upsample to audio rate (to avoid zipper noise).

What's the reason to run a modulation signal at audio rate (or even above), other than being a lazy dev?
I mean, what's the bandwidth of a modulation signal on these particular synths? I don't get why one would want to (over-)sample modulation at 2x audio rate.. do you run 48 kHz modulation signals? :o

Post

PurpleSunray: if your modulator is a sawtooth wave, or a square, or <insert waveform here>, even at 100 Hz, it'll contain harmonics.. because it's not just a 100 Hz sine wave
thus, it makes perfect sense (to me) for the modulation signals to be run at least at the host Fs

Post

PurpleSunray wrote: Samplerate of a signal derives from what frequencies you want to transport on it.
Yes, and envelopes and LFO waveform discontinuities (not just hard edges, but also things like triangle corners) have infinite bandwidth. The question is not about having enough bandwidth to actually carry the desired signals (because we can't have that); the question is how much approximation (= smoothing) we can accept before it starts to sound audibly sluggish.
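
The smoothing trade-off can be made concrete with a one-pole smoother whose time constant sets how sluggish the modulation gets (illustrative sketch; names are not from any library):

```python
import math

def smoother_coeff(tau_ms, sample_rate):
    # One-pole smoother coefficient: the smoothed value covers ~63%
    # of a step after tau_ms, ~95% after 3 * tau_ms. Larger tau means
    # less aliasing from the modulation, but a more sluggish response.
    return 1.0 - math.exp(-1000.0 / (tau_ms * sample_rate))

def smooth(values, g, state=0.0):
    # Run the smoother over a sequence of modulation values.
    out = []
    for v in values:
        state += g * (v - state)
        out.append(state)
    return out
```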

Post

mystran wrote:
PurpleSunray wrote: Samplerate of a signal derives from what frequencies you want to transport on it.
Yes, and envelopes and LFO waveform discontinuities (not just hard-edges, but also things like triangle corners) have infinite bandwidth. The question is not about having enough bandwidth to actually carry the desired signals (because we can't have that), the question is how much approximation (=smoothing) we can accept before it starts to sound audibly sluggish.
Of course, that's my point.
What I'm asking for is the reason to run the modulation signal at 2x audio rate.
Why even make the distinction if you do that? Your modulation signal obviously carries audible frequency content, so it is an audio signal.
I mean, the point of separating modulation and audio signals is that modulation usually only carries low frequencies, so you can use a lower sample rate and spend less CPU on it.
If the modulation sample rate and the audio sample rate are the same, or the modulation rate is a multiple of the audio rate, that assumption is not true anymore and this whole thread is pointless.
=>
Process modulation like audio; there is no difference if both run at the same rate.

Post

antto wrote:PurpleSunray: if your modulator is a sawtooth wave, or a square, or <insert waveform here>, even at 100Hz, it'll contain harmonics.. because it's not just a 100Hz sine wave
thus, it makes perfect sense (to me) for the modulation signals to be ran at least at the Host Fs
I know. I'm not about to argue DSP basics here.
I simply want to know what the point is of oversampling your 100 Hz square by 240x (@ 48 kHz).
You won't "eliminate harmonics"; it's a square wave => infinite sample rate needed.
240x oversampling sounds like a huge waste of CPU to me.

Post

@Chrisboy2000
Define a max frequency for your modulation input.
Then take the closest divisor to audio rate that is easy/fast to upsample (apply to audio at audio rate).
Example:
You want snappy envelope.
10ms is a pretty harsh cut.
If you go below it's a click, no more envelope (even 10ms would be more of a click).
So you need 200hz (1/0.01 x 2 ) sample rate.
We don't want to run aliasing-filter and stuff, so we do oversampling.. by let's say 32x?
So it's 6400Hz.
Divide audio sample rate with it: 48000/6400 = 7.5.
=> use 1/8 audio rate as modulation rate for fast upsample and 10ms snappy envelopes.
Limit envelope speed (>10ms/<100Hz), do less oversampling and/or low-pass filter if you want to further decrease modulation rate (i.e. because of complex processing on modulation signal)
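
The divisor arithmetic above can be sketched as a small helper (names and defaults are illustrative, following the numbers in the example):

```python
import math

def modulation_rate_divisor(sample_rate, min_env_time_s=0.01, oversample=32):
    # Nyquist rate for the fastest envelope segment (10 ms -> 200 Hz),
    # times the chosen oversampling factor, rounded to the nearest
    # power-of-two divisor of the audio rate.
    target_rate = (2.0 / min_env_time_s) * oversample   # e.g. 6400 Hz
    return 2 ** round(math.log2(sample_rate / target_rate))
```

For sample_rate = 48000 this gives a divisor of 8 (48000 / 6400 = 7.5, rounded to the nearest power of two), i.e. run the modulation at 1/8 of the audio rate.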

Post

The question is not purely bandwidth. Besides infinite bandwidth you have to consider:

- by modulating at a lower rate you're implicitly applying sample-and-hold to the signal, making its spectrum effectively infinite. Even for sine waves. Although it does roll off, as for sawtooth, pulse etc.

- modulation is not the same as a mixture of two signals; it's more like ring modulation (in the absolutely simplest case), with all the respective effects on the extension of the spectrum

- often modulation will contain implicit or explicit signal shaping. E.g. a sine wave modulation of the filter cutoff will go through a bilinear transform prewarping function, again making the spectrum infinite.

This is not to say you can't modulate at lower rates, since the signal spectrum rolls off with frequency, but you should be aware of the tradeoffs and ideally make a listening comparison as to whether the differences are audible and whether you're willing to accept them. Preferably include experienced synth users in the comparison; what might not sound critical to you could turn out critical for other people.

Edit: worst case (probably) is explicit or implicit phase modulation (including delay modulation, effects of changed group delay of filters, pulse width modulation), because there we perceive not so much the phase, but rather its derivative (frequency). Differentiation, as we know, effectively boosts the spectrum proportionally to the partial frequency, making the rolloff less steep by 6dB/octave.

Edit2: as for 10ms being a click, I don't remember the absolute values of envelope times, but I think 10ms is not necessarily just a click. IIRC, especially with exponential decays, when used for cutoff or pitch modulation it can be a thump, with critical differences in sound being still audible below 1ms. However this can strongly depend on the material and the DSP components in use.
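
The sample-and-hold point from the first bullet can be verified numerically: holding a control-rate sine up to audio rate leaves images far above the sine's own frequency (numpy sketch; the parameter choices are arbitrary):

```python
import numpy as np

sr, div, n = 48000, 32, 4096
t = np.arange(n) / sr
ctrl = np.sin(2 * np.pi * 100.0 * t[::div])   # 100 Hz sine at 1.5 kHz control rate
held = np.repeat(ctrl, div)[:n]               # zero-order hold back to audio rate
spectrum = np.abs(np.fft.rfft(held * np.hanning(n)))
# The hold images the 100 Hz sine around multiples of 1.5 kHz, so there
# is measurable energy far above the sine's own frequency.
bins_above_5k = spectrum[int(5000 * n / sr):]
```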

Post

Z1202 wrote: by modulating at a lower rate you're implicitly applying sample-and-hold to the signal, making its spectrum effectively infinite. Even for sine waves. Although it does roll off, as for sawtooth, pulse etc.
No idea what this means. Are you talking about harmonics? What sample-and-hold?
When you apply the modulation to the audio signal, you upsample (aka interpolate) to the higher rate (which is usually the audio rate). No sample-and-hold.
Z1202 wrote: modulation is not the same as a mixture of two signals, it's more like a ring modulation (in the absolutely simplest case), with all the respective effect on the spectrum extension
A modulation signal, by definition, is a signal that modulates another.
If it does not modulate another signal, it is not a modulation signal.
It doesn't matter if you do gain modulation, or frequency modulation, or amplitude modulation, or filter cutoff, or ring modulation, or ..
Z1202 wrote: Edit2: as for 10ms being a click, I don't remember the absolute values of envelope times, but I think 10ms is not necessarily just a click. IIRC, especially with exponential decays, when used for cutoff or pitch modulation it can be a thump, with critical differences in sound being still audible below 1ms. However this can strongly depend on the material and the DSP components in use.
Then design your modulation channel differently.
The whole discussion is about how pointless it is to throw out a "use 1:8", or "use audio rate", or "use double audio rate" as the modulation rate. It all depends on what your modulation signal is. In my example the limit is defined as running a 100 Hz square wave on it. To identify that it is a 100 Hz square and not a sine, you need [1][1][-1][-1] samples: 4 of them, 2 to store the period and 2 for Nyquist -> 200 Hz sample rate. We solve aliasing by oversampling.
If you need to handle 1000 Hz square waves (1 ms period), design the modulation channel for a 2000 Hz rate.

Chrisboy2000 wants to do filter cutoff modulation, and IMHO nobody can hear whether that 100% -> 0% filter cutoff modulation within 10 ms is linear or exponential. If you do that as a gain modulation, it's like "music -> silence", and you are just short of producing a click instead of a ramp-down (whose purpose is to avoid the click). (The default ramp-down to avoid a click on end-of-stream in our engine is 20 ms, which is very conservative. If you reduce the ramp to below ~5 ms you can skip it completely; it will click even with the ramp running.)
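
The ramp-down described in the last parenthesis might look like this (a minimal linear fade-out sketch; the 20 ms default is taken from the post, everything else is illustrative):

```python
def fade_out(samples, sample_rate, ramp_ms=20.0):
    # Linear ramp to zero over the last ramp_ms of the buffer, the
    # conservative end-of-stream declick described above. Modifies
    # the list in place and returns it.
    n = min(len(samples), int(sample_rate * ramp_ms / 1000.0))
    total = len(samples)
    for i in range(n):
        samples[total - n + i] *= 1.0 - (i + 1) / n
    return samples
```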

Post

PurpleSunray wrote:To identify it is a 100Hz square and not a sine, you need [1][1][-1][-1] samples.
Contrary to the intuitive understanding, the sequence you mention doesn't identify a square but a sine. At least in the "default DSP signal encoding scheme", which is PCM. If you don't believe me, I might suggest that you read more about the sampling theorem. Or just construct such a signal and upsample it in any decent audio editor.

Post

Z1202 wrote:
PurpleSunray wrote:To identify it is a 100Hz square and not a sine, you need [1][1][-1][-1] samples.
The sequence you mention doesn't identify a square. It identifies a sine. If you don't believe me, I might suggest that you read more about the sampling theorem.
Pls dude, take a pen and paper and draw this:

|* *
|____* *

Now try to draw 2 sine-wave cycles (Nyquist.. but you know the sampling theorem, it seems) that pass through all 4 points.
Please post your result.
My sine wave would be more like
|.....* .......*
|*____*

Post

PurpleSunray wrote: Now try to draw 2 sinewave cycles (nyquist.. but you know sampling theorem as it seems) that passes through all of the 4 points.
There are no square waves in digital audio anyway; it's all a lie. There are truncated harmonic series that approximate square waves by a finite sum of sines, but that's it.

Your example [1, 1, -1, -1] gives a sine when you interpolate the samples properly (using band-limited interpolation); the sine wave will be at a frequency of 0.25 times the modulation rate (since the period is 4 samples) and the peak amplitude will be sqrt(2). Curiously, this is one of the cases often used to demonstrate "inter-sample peaks". Also, in this case it isn't even a truncated series (it's just a single sine), because the second harmonic would be exactly at Nyquist and we can't have sines there (we could have a cosine, but we want a sine series for square waves, and exactly at Nyquist we don't get a choice of phase).
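
This is easy to verify numerically, using FFT zero-padding as the band-limited interpolator (numpy sketch):

```python
import numpy as np

x = np.tile([1.0, 1.0, -1.0, -1.0], 64)   # the period-4 sequence, N = 256
X = np.fft.rfft(x)                        # only the bin at 0.25 fs is nonzero
upsample = 16
Xp = np.zeros(len(x) * upsample // 2 + 1, dtype=complex)
Xp[:len(X)] = X                           # zero-pad the spectrum
y = np.fft.irfft(Xp) * upsample           # band-limited interpolation to 16x
peak = np.abs(y).max()                    # the inter-sample peak, ~sqrt(2)
```

The interpolated signal passes exactly through the original samples but swings up to sqrt(2) between them, confirming that [1, 1, -1, -1] encodes a sine of amplitude sqrt(2), not a square.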
