VST3 SDK: separation between Editor and Audioprocessor

DSP, Plugin and Host development discussion.

Post

Sometimes the GUI needs data from the audio thread, for example to draw a waveform or a spectrum. In VST 2.4 I just used a pointer to the waveform in the audio thread. This is no longer possible with the VST3 SDK.

The VST3 SDK says:
"Private Communication
Data that is unknown to the host can be transmitted by the means of messages. The communication interfaces are

Steinberg::Vst::IConnectionPoint : The host establishes a connection between processor and controller
Steinberg::Vst::IMessage : Represents a message to send to the counterpart
Steinberg::Vst::IAttributeList : A list of attributes belonging to a message

Please note that messages from the processor to the controller must not be sent during the process call! This sending could be not speed enough and then break the real time processing. Such tasks should be handled in a separate timer thread."
What the ....? Create another thread just for displaying a waveform? Then pack a big amount of duplicated data into a stream, decode the stream again, and waste lots of memory bandwidth and CPU? I wonder whether the developers of the VST3 SDK ever created anything bigger than a hello-world example in practice.

What solutions did you use to get around this?
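For reference, the usual API-independent way to hand audio-thread data to a GUI without blocking is a wait-free single-producer/single-consumer FIFO: the process call pushes samples, and a GUI timer drains them. A minimal sketch follows; all names here are illustrative, not from any SDK.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Wait-free single-producer/single-consumer FIFO. The audio thread pushes
// samples in process(); a GUI timer drains them. Capacity must be a power
// of two so the index wrap is a cheap bitmask.
class WaveformFifo {
public:
    explicit WaveformFifo(size_t capacityPow2)
        : buf_(capacityPow2), mask_(capacityPow2 - 1) {}

    // Audio thread only. Returns false (drops the sample) when full:
    // never block or allocate on the audio thread.
    bool push(float s) {
        const size_t w = write_.load(std::memory_order_relaxed);
        if (w - read_.load(std::memory_order_acquire) == buf_.size())
            return false;                      // full: drop, don't wait
        buf_[w & mask_] = s;
        write_.store(w + 1, std::memory_order_release);
        return true;
    }

    // GUI thread only. Returns false when empty.
    bool pop(float& out) {
        const size_t r = read_.load(std::memory_order_relaxed);
        if (write_.load(std::memory_order_acquire) == r)
            return false;                      // empty
        out = buf_[r & mask_];
        read_.store(r + 1, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf_;
    const size_t mask_;
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};
```

This sidesteps the message-passing machinery entirely when both sides live in one process, at the cost of dropping samples when the GUI falls behind.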

Post

I (and others) have struggled with this. (See this post: viewtopic.php?f=33&t=544032)
Here are two options:
1. Use a combined Processor/Controller: SingleComponentEffect
2. Use an IConnectionPoint to glue the Controller to the Processor (sending a pointer from the Controller to the Processor). After connecting, the Processor can call any method of the Controller. (Note that you are then calling from the Processor thread, of course.) JUCE uses this method.
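A self-contained sketch of what option 2 boils down to. The SDK's IAttributeList is replaced here by an illustrative mock; real code would use IConnectionPoint and IMessage from the VST3 SDK.

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Illustrative stand-in for Steinberg::Vst::IAttributeList, NOT the real
// SDK type. It shows the mechanics: the controller smuggles its `this`
// pointer to the processor as a binary attribute of a message.
struct MockAttributeList {
    std::map<std::string, std::vector<uint8_t>> bins;
    void setBinary(const char* id, const void* data, size_t size) {
        const auto* p = static_cast<const uint8_t*>(data);
        bins[id] = std::vector<uint8_t>(p, p + size);
    }
    bool getBinary(const char* id, const void*& data, size_t& size) const {
        auto it = bins.find(id);
        if (it == bins.end()) return false;
        data = it->second.data();
        size = it->second.size();
        return true;
    }
};

struct Controller { int magic = 42; };  // placeholder for the edit controller

// Controller side: pack the raw pointer into the "message".
inline void sendControllerPointer(MockAttributeList& msg, Controller* c) {
    msg.setBinary("ControllerPtr", &c, sizeof(c));
}

// Processor side: unpack it. Works ONLY when both ends share one address
// space, which is exactly the limitation raised later in this thread.
inline Controller* receiveControllerPointer(const MockAttributeList& msg) {
    const void* data = nullptr;
    size_t size = 0;
    if (!msg.getBinary("ControllerPtr", data, size) || size != sizeof(Controller*))
        return nullptr;
    Controller* c = nullptr;
    std::memcpy(&c, data, sizeof(c));
    return c;
}
```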

Post

I also thought of using method 2). It's good to see that JUCE uses it as well.

Post

If you want to be lazy, please use the combined Processor/Controller: SingleComponentEffect. Sending a pointer through the connection point is going to fail the moment the memory space is not shared between them, and you will cause a crash. The first option does not have this problem.
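The crash-safe route the SDK documentation quoted in the first post describes is to copy the data itself into the message instead of a pointer. A minimal sketch of just the encode/decode step; the names are illustrative, and in real VST3 code the byte buffer would travel via IMessage and IAttributeList::setBinary.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Serialize a waveform block into a byte payload suitable for a message.
inline std::vector<uint8_t> encodeWaveform(const std::vector<float>& samples) {
    std::vector<uint8_t> bytes(samples.size() * sizeof(float));
    std::memcpy(bytes.data(), samples.data(), bytes.size());
    return bytes;
}

// Reverse step on the receiving side.
inline std::vector<float> decodeWaveform(const std::vector<uint8_t>& bytes) {
    std::vector<float> samples(bytes.size() / sizeof(float));
    std::memcpy(samples.data(), bytes.data(), bytes.size());
    return samples;
}
```

The copy is exactly the overhead the original post objects to, but it survives process separation, which a raw pointer does not.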

Post

Hi Arne,

It's not as easy as you say.

Please note that I am a full-stack, full-time developer with 19 years of experience in VST programming.
Porting our existing VST 2.4 products to VST3 means serious problems for us devs, since the format is not well designed for complex plugins. And I am not the only one: that's why many devs stuck with VST 2.4 or used JUCE (which seems to use a pointer into the audio thread to get around the message-passing party of VST3).
Reference:
viewtopic.php?f=31&t=398532
It is extremely difficult to port complex VST 2 projects to VST3 (aside from many technical problems and several strange design decisions in the VST3 SDK).
I ran a poll a few days ago. After more than a decade, the majority of users still prefer VST 2.4. Maybe Steinberg should just fix the design flaws of the past and release VST 4, this time with proper MIDI support, an SDK that compiles on common platforms without hacks, a slim SDK with examples that work, proper documentation, and backward compatibility.

Post

If you have problems with the SDK or API, you should post them on the Steinberg SDK forum. I see where you are coming from, but the VST 2 SDK is not well designed for complex plug-ins either; you have just lived with its flaws or ignored them all this time.

Just as an example, since it is one point of your initial post: the waveform in the audio thread. You live with the flaw that what you display in your plug-in editor is not always what the user hears at that point in time, because what you calculate in the audio process block may come out of the speakers much later than your usual 2 to 10 ms, depending on the audio graph. Just place a plug-in with a high delay after your plug-in in the graph; I have a few that can have a delay of more than 1 or 2 seconds. And that's just one of the design flaws of VST2.

Speaking of MIDI: the new MIDI 2 stuff will never work with the way VST 2 is designed, whereas VST3 supports nearly all the important parts of MIDI 2 out of the box (e.g. per-note controllers, higher value precision, etc.).

Post

Hi Arne,

VST 2.4 has been working fine for many years. Our products are very stable and work reliably in all known hosts. I do not see any serious 'design flaws' in VST 2.4. It's up to the developer to sync threads and make sure that nothing crashes. Of course VST2 does not implement the academic 'model-view-controller' philosophy in a strict way, but it just works very well in practice, for the users as well as for the devs, and has for many years.

MIDI 2.0 will work in VST 2, since it is backward-compatible with MIDI 1.x. VST3 does not pass MIDI messages without tricks. I am still wondering about the mod wheel: it was easy to route the mod wheel in VST 2, but in VST3 it seems to be a major hassle. The same goes for MIDI learn.
As a VST interface designer you should be aware of this.
Instead of defending VST3 in its current state, please recognize the demands of the users and developers and improve your interface for the future. Please abandon VST3 and create VST4. We devs need something that works in practice with a reasonable amount of development effort; we have families to feed and need to make a living. The customers need something that is reliable, has low CPU usage, and lets them load their old projects without hassle.
The success of JUCE should prove that my concerns are not just those of a single developer.
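For context on the mod-wheel complaint: in VST3, MIDI CCs do not arrive as events in the processor. Instead, the edit controller implements IMidiMapping::getMidiControllerAssignment, which maps a CC such as CC#1 (mod wheel) to a parameter ID, and the host then delivers CC moves as ordinary parameter changes. A self-contained mock of just that lookup; MidiCcMap and its methods are illustrative, not SDK types.

```cpp
#include <cstdint>
#include <map>

// Simplified stand-in for VST3's MIDI-CC-to-parameter mapping. In the real
// SDK the edit controller implements IMidiMapping::getMidiControllerAssignment
// and the host then feeds CC moves in as parameter changes; this mock only
// illustrates the lookup itself.
using ParamID = uint32_t;
constexpr int16_t kCtrlModWheel = 1;   // MIDI CC#1, same value as the SDK enum

class MidiCcMap {
public:
    void assign(int16_t channel, int16_t cc, ParamID id) {
        map_[key(channel, cc)] = id;
    }
    // Mirrors the shape of getMidiControllerAssignment: true + id on a hit.
    bool getAssignment(int16_t channel, int16_t cc, ParamID& id) const {
        auto it = map_.find(key(channel, cc));
        if (it == map_.end()) return false;
        id = it->second;
        return true;
    }
private:
    static int32_t key(int16_t ch, int16_t cc) { return (ch << 16) | cc; }
    std::map<int32_t, ParamID> map_;
};
```

MIDI learn then reduces to calling assign() with whatever CC the user last moved, which is workable but is indeed more ceremony than reading raw CC events in VST 2.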

Post

Ugh. Message passing? Completely unnecessary. With VST 2, my audio processor knows the pointer to the current GUI and the GUI knows the pointer to the main audio processor. They just set values/call methods as needed. One more thing I'm going to have to worry about when I get into VST 3.
I started on Logic 5 with a PowerBook G4 550Mhz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

AAX and AudioUnits use exactly the same message passing paradigm.
Nobody seems to complain about this, so I wonder why VST3 gets all the flak.

Post

Sounds like the same kind of issue as back in the time, when switching from a fixed-hardware single-user computer with minimal OS to a multi-user OS with memory protection. Developers complained that they couldn’t POKE hardware anymore, have to malloc memory, why is it so difficult to do simple tasks, etc.

Post

Fire Sledge - Ohm Force wrote: Thu May 28, 2020 1:26 pm Sounds like the same kind of issue as back in the time, when switching from a fixed-hardware single-user computer with minimal OS to a multi-user OS with memory protection. Developers complained that they couldn’t POKE hardware anymore, have to malloc memory, why is it so difficult to do simple tasks, etc.
Because... uh... abstraction? You can poke all the memory you want to. You just have to own it first! :lol: It really is nothing new. Just some programmers learned quick and dirty BASIC programming, complete with peeking and poking about in undocumented (for a reason!) memory locations, on their 8-bit desktops and learned they should have total control over the system. It's nice when you're writing Forth routines to position your telescope with infinitesimal exactitude, but you're going to get spanked when you start writing for a multi-user system.

I sometimes think that Steinberg purposely obfuscates their SDKs. 3 years in and I'm still discovering the most basic of things about VST2 that should have been on page 1 of the VST-writing manual. Despite some of their weird choices in trying to cram a C++ framework onto a C backend. We all know how well that worked out! I can see the slim, (somewhat) speedy functional framework being completely covered up with hack after kludge after hack.

I'm really going to have to start picking apart some VST 3 examples to see what's been improved. But every time I try to tackle the AU format, my brain just melts and seeps out my left ear... meh...

Post

Eduur wrote: Wed May 27, 2020 10:25 am 2. Use an IConnectionPoint to glue the Controller to the Processor (sending a pointer from the Controller to the Processor). After connecting, the Processor can call any method of the Controller. (Note that you are then calling from the Processor thread, of course.) JUCE uses this method.
This will only work if Editor and Processor run in the same process. But in that case why would you even want to separate Editor and Processor? That makes no sense.

Post

lorcan wrote: Wed May 27, 2020 9:04 pm AAX and AudioUnits use exactly the same message passing paradigm.
Nobody seems to complain about this, so I wonder why VST3 gets all the flak.
I thought the "de facto standard" way for handling this in AU is to define a custom property for the GUI to fetch a pointer to the processor, then ignore the whole separation.

Post

syntonica wrote: Thu May 28, 2020 6:58 pm Despite some of their weird choices in trying to cram a C++ framework onto a C backend.
As it turns out, the problem in VST2 has very little to do with the C backend (which is actually very easy to wrap around) and everything to do with the C++ framework which is mostly just useless junk.

Post

mystran wrote: Mon Jun 01, 2020 11:55 am
syntonica wrote: Thu May 28, 2020 6:58 pm Despite some of their weird choices in trying to cram a C++ framework onto a C backend.
As it turns out, the problem in VST2 has very little to do with the C backend (which is actually very easy to wrap around) and everything to do with the C++ framework which is mostly just useless junk.
That's what I said! :P
