VST pitfalls with parameter synchronization

DSP, Plugin and Host development discussion.

Post

I've always ignored some pitfalls with the VST 2.4 spec and its parameter synchronization, but I am addressing them right now.

Let's say a user turns a knob in the GUI, changing its value from 0.1 to 0.2 and then to 0.3. We redraw the GUI control and report each new value to the host sequencer and to our audio components.

Some hosts will send the parameter changes back to our plugin, some do it only a bit later, and some might even do it while we're in the middle of processing audio.

The host might therefore tell us to set the parameter to "0.2" and a bit later to set it to "0.3" - all while our parameter is already at 0.3. Our parameter then briefly jumps back to 0.2 before returning to 0.3: instead of the sequence 0.1, 0.2, 0.3, our parameter takes the values 0.1, 0.2, 0.3, 0.2, 0.3.

Also, a host might report parameter changes while we are doing our audio processing, which can lead to unwanted side effects.

It's such a pity that Steinberg did not simply define that parameter changes need to be sent right before the audio processing call and that parameter updates may not be sent back to the plugin.


But well, we need to work with these pitfalls and circumvent them. We could (A) work on a copy of all parameters which will stay unaffected during processing or (B) queue all parameter changes in a buffer that we only apply at the beginning of audio processing. Parameter changes that occur during audio processing will thus be buffered and only applied with the next call; the buffer needs to be thread-safe and preferably lock-free.
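Option (B) could be sketched as a lock-free single-producer/single-consumer ring buffer: the host/GUI thread pushes changes from setParameter(), and the audio thread drains the queue at the top of process(). All names here are hypothetical; this is a minimal sketch, not a complete implementation (e.g. a full plugin would decide what to do when the queue is full).

```cpp
#include <atomic>
#include <cstddef>

struct ParamChange { int index; float value; };

// Single-producer/single-consumer lock-free ring buffer.
class ParamQueue {
public:
    bool push(const ParamChange& c) {            // called from setParameter()
        size_t w = writePos.load(std::memory_order_relaxed);
        size_t next = (w + 1) % kSize;
        if (next == readPos.load(std::memory_order_acquire))
            return false;                        // queue full: caller must handle
        buffer[w] = c;
        writePos.store(next, std::memory_order_release);
        return true;
    }
    bool pop(ParamChange& out) {                 // called at the start of process()
        size_t r = readPos.load(std::memory_order_relaxed);
        if (r == writePos.load(std::memory_order_acquire))
            return false;                        // queue empty
        out = buffer[r];
        readPos.store((r + 1) % kSize, std::memory_order_release);
        return true;
    }
private:
    static const size_t kSize = 1024;
    ParamChange buffer[kSize];
    std::atomic<size_t> writePos{0}, readPos{0};
};
```

At the beginning of process() you would loop `while (queue.pop(change)) applyChange(change);` so that all changes land between buffers, never in the middle of one.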

Furthermore, we need to prevent the parameter from jumping back to a previous value. To do so, we now block host parameter changes for any control the user is changing in the plugin GUI for a certain time, and then apply only the last value the host sent. We currently block for 1 second; in our example, the host will have sent parameter values 0.2 and 0.3 within that time, and we apply only the last one (0.3), so there is no jumping. As a side effect, the parameter won't update for 1 second if the host applies automation while the user turns a knob, which is actually a nice thing.
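The blocking scheme described above could be sketched per parameter like this. All names are hypothetical; the point is just the mechanism: after a GUI edit, host-side setParameter() calls are held back for a window (1 second here) and only the most recent value is applied once the window expires.

```cpp
#include <chrono>

// Per-parameter echo blocker: defers host setParameter() calls for a
// fixed window after the user last touched the control in the GUI.
struct EchoBlocker {
    using Clock = std::chrono::steady_clock;
    Clock::time_point blockUntil{};   // end of the current blocking window
    float pending = 0.f;              // most recent deferred value
    bool hasPending = false;

    void onGuiEdit() {                // call whenever the user moves the knob
        blockUntil = Clock::now() + std::chrono::seconds(1);
    }

    // Returns true if 'value' may be applied immediately; otherwise the
    // value is remembered and older deferred values are overwritten.
    bool onHostSetParameter(float value) {
        if (Clock::now() < blockUntil) {
            pending = value;
            hasPending = true;
            return false;
        }
        return true;
    }

    // Poll periodically (e.g. once per process() call) to apply the last
    // deferred value after the window has expired.
    bool flush(float& out) {
        if (hasPending && Clock::now() >= blockUntil) {
            out = pending;
            hasPending = false;
            return true;
        }
        return false;
    }
};
```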

Although I have this solution sketched out and am implementing it right now, it doesn't feel right. Am I missing something?

Post

Doesn't setParameterAutomated in the SDK call setParameter directly? Why would a host echo that at all? I certainly don't remember noticing any hosts do it, but then again I've not paid much attention either. So can't comment on any of the "jumping around" stuff, because I've never ever observed any.

Anyway, a "private set" of parameters for processing works like a charm in most cases. If you make that set persistent, you can also set flags (one per parameter) to indicate whether the variables have changed since the last process-call (assuming you sync once per process-call). You can then use those flags to skip expensive recalculation, so you kinda get the benefits of a queue too (sort of, anyway).
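My reading of this scheme, as a minimal sketch with hypothetical names: a shared set written by setParameter(), a private copy owned by the audio thread, and one dirty flag per parameter, synced once at the top of process().

```cpp
#include <atomic>

static const int kNumParams = 8;

std::atomic<float> shared[kNumParams];   // written by the GUI/host thread
std::atomic<bool>  dirty[kNumParams];    // "changed since last sync" flags
float priv[kNumParams];                  // audio thread's persistent private set

void setParameterImpl(int index, float value) {  // GUI/host thread
    shared[index].store(value, std::memory_order_relaxed);
    dirty[index].store(true, std::memory_order_release);
}

void syncAtProcessStart() {              // audio thread, top of process()
    for (int i = 0; i < kNumParams; ++i) {
        if (dirty[i].exchange(false, std::memory_order_acquire)) {
            priv[i] = shared[i].load(std::memory_order_relaxed);
            // expensive per-parameter recalculation goes here; it is
            // skipped entirely whenever the flag was already clear
        }
    }
}
```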

Post

If your parameters are atomic variables, there's probably no problem at all. You're imagining this "issue" for the most part, Jakob. (Take my Poly-Ana for example. Very big synth architecture, but all my parameters map to atomic variables, and they can change at any time without hurting anything.)

Yes, if you have non-atomic parameters you need to schedule their updates, probably at the beginning of a process call. That's YOUR responsibility, the host and API don't need to be aware of it, and it's easily managed simply by setting an atomic variable that says it's time to swap, or whatever, your non-atomic data. Exactly as mystran says.
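The swap scheme for non-atomic data could be sketched like this (hypothetical names; it assumes the GUI thread does not refill the inactive buffer until the previous swap has been consumed by the audio thread):

```cpp
#include <atomic>
#include <utility>
#include <vector>

std::vector<float> front(2048), back(2048); // front: read by the audio thread
std::atomic<bool> swapPending{false};

void publishNewWavetable(const std::vector<float>& data) { // GUI thread
    back = data;                        // fill the inactive buffer
    swapPending.store(true, std::memory_order_release);
}

void processStart() {                   // audio thread, top of process()
    if (swapPending.exchange(false, std::memory_order_acquire))
        std::swap(front, back);         // cheap swap of the vectors' internals
}
```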

And yes mystran, setParameterAutomated immediately passes it on to effect::setParameter...

VST 2.4 SDK...

Code:

void AudioEffect::setParameterAutomated (VstInt32 index, float value)
{
	setParameter (index, value);
	if (audioMaster)
		audioMaster (&cEffect, audioMasterAutomate, index, 0, 0, value);	// value is in opt
}
There's nothing inherently wrong with this particular paradigm. Don't think too much. If this didn't work, nothing would work.

Post

The host could interrupt process() for a setParameter() call?

Post

camsr wrote:The host could interrupt process() for a setParameter() call?
It can happen on a different thread. Yes. And it does happen, as the editor is typically on a different, lower priority thread than the audio process, and calls effect::setParameter() directly.

Post

Thanks all for chiming in here!

So let's say the parameter in question is the pitch of an oscillator of a software synth. And let's assume these steps during processing:

1. The host calls our process code. Some pre-calculations happen based on the current pitch of +/- 0 semitones. For example we might determine the size of a buffer containing a waveform and the step-size for iterating through it.

2. Still in process, the host tells us to set a new pitch which is +12 semitones. We do so immediately.

3. Our OSC code gets an array containing the waveform data. Based on the current pitch (which is now +12 semitones), this is 1024 samples long. But we use the precalculated data, which says: waveform length is 2048 samples, step size 1.8, and our current reading position is 1023.3 samples... and boooomm, we'll crash at the next sample, or at least render invalid audio.


That's just one example, and of course we could implement more robust code in the oscillator that retrieves all values at the very beginning. But I want to keep the programming here easy and, more importantly, safe. Errors like these are easy to make and easy to overlook, as they might only pop up very rarely. So I want the framework itself to be as robust as possible. So, atomic variables would not solve the issue here, would they?

Setting a lock around the variables for as long as the process is running would not be an option, I guess, as it might block for too long.

Post

You can spin, but on systems that aren't optimized well (big AV packages installed etc), this eats a lot of CPU and may even break the DAW at low latencies.

What I'm currently doing is to employ a "dirty" mechanism, where I use InterlockedExchange() to signal bigger tasks for the engine to perform. So the crash you describe cannot happen, because the work will be performed just before the rendering and not in the UI thread. However atomics may still be accessed by both the GUI and engine (I see no way to prevent that, except by spinning).
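My reading of this "dirty" mechanism, sketched with std::atomic instead of the Win32 InterlockedExchange() mentioned (the exchange semantics are equivalent here): a flag signals that a bigger job is pending, and the engine performs it right before rendering, never on the UI thread.

```cpp
#include <atomic>

std::atomic<int> rebuildRequested{0};

void onParameterChanged() {            // UI thread: just raise the flag
    rebuildRequested.store(1, std::memory_order_release);
}

void process(/* float** buffers, int numSamples, ... */) {  // audio thread
    // Atomically consume the flag; at most one thread sees the '1'.
    if (rebuildRequested.exchange(0, std::memory_order_acquire)) {
        // heavy work happens here, before rendering: e.g. recompute
        // wavetables or filter coefficients from the new parameters
    }
    // ...render audio with consistent state...
}
```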

Richard
Synapse Audio Software - www.synapse-audio.com

Post

Just copy the parameter value at the start of your process function to a temporary variable. Then the rug can't be pulled out from under you.

You could also re-copy it every sample of your sample loop, if you needed that resolution.

One line of code.

:shrug:
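That suggestion, as a minimal sketch (hypothetical names): snapshot the atomic parameter once at the top of process(), so every calculation in that block sees the same value no matter when setParameter() fires.

```cpp
#include <atomic>

std::atomic<float> pitchParam{0.0f};     // written by setParameter() at any time

void process(float* out, int numSamples) {
    // One copy per block: a mid-block setParameter() cannot change it
    // under our feet.
    const float pitch = pitchParam.load(std::memory_order_relaxed);
    for (int i = 0; i < numSamples; ++i) {
        out[i] = pitch;                  // placeholder for the real DSP code
    }
}
```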

Post

Why are you changing the waveform buffer with pitch? Buffers should be static (unchanging) through the life of a subroutine. You can still change pitch by scaling the counter.

Post

camsr wrote:Why are you changing the waveform buffer with pitch? Buffers should be static (unchanging) through the life of a subroutine. You can still change pitch by scaling the counter.
Did you consider things like band-limiting? Changing the "read speed" only works for a limited range.. at some point you need to consider aliasing.

That said, keeping the buffer length fixed isn't necessarily a totally stupid idea.. because for high frequencies some (or a lot of) over-sampling helps keep interpolation noise down, whereas for low frequencies it's not such a big deal, because the noise is going to be lower anyway. [The downside is wasted storage space, if the waveforms aren't calculated on the fly.]

Post

mystran wrote: Did you consider things like band-limiting? Changing the "read speed" only works for a limited range.. at some point you need to consider aliasing.
Yes I did. I don't know much about the topic, but he is describing an error like a buffer overrun. The whole approach looks reversed to me: why resize a buffer when you can interpolate?
Jakob / Cableguys wrote: 3. Our OSC code gets an array containing the waveform data. Based on the current pitch (which is now +12semitones) this is 1024 samples long. We use the precalculated data which says: Waveform length is 2048 samples, step size 1.8, and our current reading position is at 1023.3 samples.. and boooomm, we'll crash in the next sample, or at least render invalid audio.
I suppose the interpolation/oversampling is tied to the resizing of this buffer?

Post

camsr wrote:
mystran wrote: Did you consider things like band-limiting? Changing the "read speed" only works for a limited range.. at some point you need to consider aliasing.
Yes I did. Don't know much about the topic, but he is describing an error like buffer overrun. The whole approach looks reversed to me, why resize a buffer when you can interpolate?
Jakob / Cableguys wrote: 3. Our OSC code gets an array containing the waveform data. Based on the current pitch (which is now +12semitones) this is 1024 samples long. We use the precalculated data which says: Waveform length is 2048 samples, step size 1.8, and our current reading position is at 1023.3 samples.. and boooomm, we'll crash in the next sample, or at least render invalid audio.
I suppose the interpolation/oversamping is with the resizing of this buffer?
Yes, I should have been more clear here. So depending on the pitch, different pre-calculated buffers are used. If the pitch goes up by one octave, the buffer size is halved. But... that was just a very simplified example, with no interpolation taken into account, just to show a possible crash.

Post

AdmiralQuality wrote:Just copy the parameter value at the start of your process function to a temporary variable. Then the rug can't be pulled out from under you.

You could also re-copy it every sample of your sample loop, if you needed that resolution.

One line of code.

:shrug:
Hey AdmiralQuality, I'm prone to making mistakes. So I don't want to hand-pick the values that must not change during processing, but simply ensure that all values stay unchanged.

So if I do this by copying all values beforehand, I'd need to copy 2500+ parameters (for Curve) at the beginning of the <process>. It's even worse, as our parameter values are not just stored in one simple array, but wrapped in bigger objects. We could rearrange the value storage into an array.. but still, this would mean a <memcpy> of 10kB for every single <process> call.

Post

Richard_Synapse wrote:You can spin, but on systems that aren't optimized well (big AV packages installed etc), this eats a lot of CPU and may even break the DAW at low latencies.
Hey Richard, you set this idea spinning in my head, and every other task I worked on took longer than normal because it kept nagging me to think about it. But now, the other tasks are finished:

Just checking if I understand you correctly: The <process> and <setParameter> would both set a thread-safe flag while running to ensure that they do not run at the same time.

I'd expect that having <process> wait for <setParameter> should not be an issue if <setParameter> has only little stuff to do.

But let's say <process> gets called and sets the flag first. Then <setParameter> gets called, checks the flag and starts spinning until the flag is unset.

To optimize, we could spin only for a short while and, if the flag is still set, fall back to a mutex. I guess this would strike the best balance between fast processing in the common case, when <process> is not running, and acceptable performance on unoptimized systems or when <process> takes its time.
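That spin-then-fall-back idea could be sketched like this (hypothetical names; a bounded try_lock spin followed by a blocking lock, with <process> holding the same mutex for the duration of the block):

```cpp
#include <mutex>

std::mutex engineMutex;   // held by process() for the whole audio block

void setParameterGuarded(int index, float value, float* params) {
    // Bounded spin first: cheap when process() is not running.
    for (int i = 0; i < 1000; ++i) {
        if (engineMutex.try_lock()) {
            params[index] = value;
            engineMutex.unlock();
            return;
        }
    }
    // Still contended: fall back to blocking until process() finishes.
    std::lock_guard<std::mutex> lock(engineMutex);
    params[index] = value;
}
```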

So in the worst case, the user moves a knob, which calls <setParameter>, it takes a while until the parameter can finally be set, and the UI might stutter. How bad would this stutter be? Let's say our plugin uses 25% CPU and has to process 512 samples at a sampling rate of 44100 samples/s. A buffer spans about 11.6 ms, so the processing takes around 3 ms (if not interrupted), and if we're unlucky the UI thread will be blocked for those 3 ms. I guess this would not hurt?

Of course spinning plus a mutex will add to the CPU usage, but I wonder if it can really be that bad. I guess I should profile this..
Richard_Synapse wrote:What I'm currently doing is to employ a "dirty" mechanism, where I use InterlockedExchange() to signal bigger tasks for the engine to perform. So the crash you describe cannot happen, because the work will be performed just before the rendering and not in the UI thread. However atomics may still be accessed by both the GUI and engine (I see no way to prevent that, except by spinning).

Richard
What do you mean by "signalling bigger tasks"?

Post

mystran wrote:Doesn't setParameterAutomated in the SDK call setParameter directly? Why would a host echo that at all? I certainly don't remember noticing any hosts do it, but then again I've not paid much attention either. So can't comment on any of the "jumping around" stuff, because I've never ever observed any.
Hey mystran, yes, some hosts echo the <setParameter> call.. Ableton Live 8 does for example, at least on Windows.

@all: So even if we succeed in getting our internal parameter values not to change during <process> - by copying their values at the beginning of <process>, by scheduling their updates to happen only at the beginning of <process>, or by setting a lock around the whole of <process> and <setParameter>:

We'd still have the problem of parameter values toggling back and forth. See my example in the very first post: the parameter might be moved by the user from 0.1 to 0.2 to 0.3, and will temporarily jump back to 0.2 when the host calls <setParameter> and echoes the parameter change, and then a little later again to 0.3.

I know.. this flickering is maybe not a high-priority problem, but it would still be nice to solve. If we are unlucky, the parameter in our example will be set back to 0.2 just before a process call, and 0.3 will only be set for the next process call - so we'd get a full audio buffer with a slightly wrong value.
