Specific technical questions about VST3, and info for users

DSP, Plugin and Host development discussion.

Post

Do you mean the wrapper that wraps a VST3 implementation as a VST2? I'd rather stay away from it...

Post

Yes, but isn't it the only way today to compile a VST2 if you are a newcomer to plug-in development and have only downloaded the VST3 SDK?

EDIT: I forgot that the current VST3 SDK download link provides both the VST3-to-VST2 wrapper AND the old VST 2.4 SDK. Now seems like the right time to back up these files.
Last edited by Ivan_C on Mon Jun 11, 2018 2:27 pm, edited 1 time in total.

Post

Ivan_C wrote:Quick question for the developers who are not using JUCE: are you still using the good old VST 2.4 SDK for developing VST2 plug-ins? Or the VST3 SDK's wrapper for VST2?
Still using the original VST 2.4 SDK.
Less CPU overhead and code bloat, plus it's good to have full control over as much of the code as possible,
and not depend on Steinberg adding features/bugfixes to their VST2/VST3 bridge.

Post

VST3 is an inevitable evil. It is not very well thought out from an API-complexity and compatibility point of view. VST2 could have been improved by adding a couple of API functions for sidechain inputs and sample-accurate automation. Decoupling of UI and DSP was always a bad idea; it's good only from an academic-perfection point of view. Now we have to push megabytes of data between UI and DSP just to draw a spectrum-analysis view.
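The kind of plumbing that decoupling forces on a plugin, shipping analyser frames from the audio thread to the UI thread, can be sketched as a minimal single-producer/single-consumer FIFO. This is illustrative only; the names here are made up and this is not any SDK's API:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Minimal lock-free single-producer/single-consumer ring buffer.
// The audio thread pushes, the UI thread pops; neither side blocks.
template <typename T, std::size_t N>
class SpscFifo {
    std::array<T, N> buf{};
    std::atomic<std::size_t> head{0}, tail{0};
public:
    bool push(const T& v) {                   // audio thread only
        std::size_t h = head.load(std::memory_order_relaxed);
        std::size_t next = (h + 1) % N;
        if (next == tail.load(std::memory_order_acquire))
            return false;                     // full: drop the frame
        buf[h] = v;
        head.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& v) {                          // UI thread only
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire))
            return false;                     // empty: nothing to draw
        v = buf[t];
        tail.store((t + 1) % N, std::memory_order_release);
        return true;
    }
};
```

In a real analyser, `T` would be a block of FFT magnitudes rather than a scalar, and dropped frames are acceptable because the display only needs the latest data.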

The dispatcher function is not an issue. First of all, compilers optimize switch() structures; secondly, dispatch functions are called infrequently. Other operations have direct entry functions. I think VST2 was well thought out from the very start: dispatch for infrequent operations, separate entry functions for frequent operations.
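As a rough sketch of that split between a switch-based dispatcher and direct entry points (simplified, with hypothetical opcodes and types, not the actual VST2 SDK):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical opcodes, standing in for the VST2-style opcode set.
enum Opcode { kOpen, kClose, kGetProgramName, kSetSampleRate };

struct Plugin {
    float sampleRate = 44100.f;
    char  programName[32] = "Init";
};

// Infrequent operations go through one switch-based entry point;
// compilers typically lower a dense switch to a jump table.
intptr_t dispatch(Plugin* p, int opcode, void* ptr, float opt) {
    switch (opcode) {
    case kOpen:           return 1;
    case kClose:          return 1;
    case kGetProgramName:
        std::strcpy(static_cast<char*>(ptr), p->programName);
        return 1;
    case kSetSampleRate:  p->sampleRate = opt; return 1;
    default:              return 0;  // unknown opcodes are simply ignored
    }
}

// The frequent operation (audio processing) gets a direct entry function,
// so no dispatch cost is paid per block.
void processReplacing(Plugin*, float** in, float** out, int n) {
    for (int i = 0; i < n; ++i) out[0][i] = in[0][i];
}
```

The point of the design is visible here: the switch only runs on setup/housekeeping calls, while the hot path is an ordinary function call.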

Well, as Steinberg is now losing market share in a major way, things may change once more with plugin formats in the future. I'm ready to move to a VST2-inspired format with many vendor-specific enhancements implemented over time (e.g. we support some enhancements for Reaper).

The Internet is globally unreliable; I do not know who would risk using SoundGrid and the like in live performances, so UI and DSP decoupling effectively has no benefits.

Post

I wrote VirtualCZ as a proper VST3 and wrapped it with Steinberg's AUv2 and VST2 wrappers plus my own AAX wrapper; I had to modify the wrappers quite a lot. The fact that VST3 is weird about MIDI CCs can be particularly annoying when you wrap to other formats. Hopefully this will change, but personally I don't want to depend on those wrappers, even though they work pretty reliably in the end.

Post

Aleksey Vaneev wrote:Decoupling of UI and DSP was always a bad idea; it's good only from an academic-perfection point of view. Now we have to push megabytes of data between UI and DSP just to draw a spectrum-analysis view.
AAX and AU do it too; I'm not sure you could say Apple and Avid are academic perfectionists.
For one, it forces a clean separation of concerns, which strongly encourages the developer to think this through in depth. If you really need direct communication, both AAX and VST3 allow it (SingleComponentEffect or so).
For AAX DSP there is no choice, as UI and DSP run on different chips anyway.
It also opens up the possibility of an open-source standard for dedicated DSP-offloading boxes à la SoundGrid.
Wouldn't that be nice?

Post

lorcan wrote:
Aleksey Vaneev wrote:Decoupling of UI and DSP was always a bad idea; it's good only from an academic-perfection point of view. Now we have to push megabytes of data between UI and DSP just to draw a spectrum-analysis view.
AAX and AU do it too; I'm not sure you could say Apple and Avid are academic perfectionists.
For one, it forces a clean separation of concerns, which strongly encourages the developer to think this through in depth. If you really need direct communication, both AAX and VST3 allow it (SingleComponentEffect or so).
For AAX DSP there is no choice, as UI and DSP run on different chips anyway.
It also opens up the possibility of an open-source standard for dedicated DSP-offloading boxes à la SoundGrid.
Wouldn't that be nice?
It's not nice in any way, even if many others have forced or supported it. It introduces latency and overhead. It was a good idea in the era of DSP chips and single-core processors; now, with a 32-core Threadripper to be released soon, it's an absolutely insane idea. DSP chips and "offloading" can't rival today's native performance in any way. And it's simply expensive.

Of course, I had to "jump around" the AudioUnit, VST3 and AAX "ideology", but it's an inevitable evil; there is nothing technically reasonable about it. VST2 is reasonable in most of its aspects, or it would not be so widely adopted by free-plugin enthusiasts. Efficient, cheap to develop.

Post

Aleksey Vaneev wrote: It's not nice in any way, even if many others have forced or supported it. It introduces latency and overhead. It was a good idea in the era of DSP chips and single-core processors; now, with a 32-core Threadripper to be released soon, it's an absolutely insane idea. DSP chips and "offloading" can't rival today's native performance in any way. And it's simply expensive.
SoundGrid servers are basically Linux boxes running standard Intel CPUs.
Some use cases (especially live/broadcast) require guaranteed, glitch-free audio processing.
There, nothing beats a separate box with redundant PSUs, etc. If the GUI/controller goes down for some reason, the show can go on.

Native is great, but DSP/FPGA chips without an OS, or using bare-bones RT kernels, are still king for specialized applications where performance per watt, minimum latency, etc. are crucial and price is a secondary criterion. I don't think a Windows OS would cut it for something like this http://www.avid.com/products/venue-s6l-system for example.

Post

lorcan wrote:
Aleksey Vaneev wrote: It's not nice in any way, even if many others have forced or supported it. It introduces latency and overhead. It was a good idea in the era of DSP chips and single-core processors; now, with a 32-core Threadripper to be released soon, it's an absolutely insane idea. DSP chips and "offloading" can't rival today's native performance in any way. And it's simply expensive.
SoundGrid servers are basically Linux boxes running standard Intel CPUs.
Some use cases (especially live/broadcast) require guaranteed, glitch-free audio processing.
There, nothing beats a separate box with redundant PSUs, etc. If the GUI/controller goes down for some reason, the show can go on.

Native is great, but DSP/FPGA chips without an OS, or using bare-bones RT kernels, are still king for specialized applications where performance per watt, minimum latency, etc. are crucial and price is a secondary criterion. I don't think a Windows OS would cut it for something like this http://www.avid.com/products/venue-s6l-system for example.
SoundGrid will fail if some intermediate Internet hub goes down, and then there's a minimum of 5-10 ms of round-trip latency. It's also not a good idea to degrade API performance for 99% of users for the sake of the 1% of lucky millionaires.

vst4free's Alexa rating is on par with iZotope's and Waves'. Times are changing, money is leaving audio DSP, and we are left with plugin formats that are not up to date with modern reality.

Post

Aleksey Vaneev wrote: SoundGrid will fail if some intermediate Internet hub goes down, and then there's a minimum of 5-10 ms of round-trip latency. It's also not a good idea to degrade API performance for 99% of users for the sake of the 1% of lucky millionaires.
They claim latency as low as 0.8 ms here: https://www.waves.com/soundgrid-101
and 1.8 ms interfaced with a Behringer X32: https://dbbaudio.com/2017/waves-x-wsg-latency/
Good luck achieving something like that reliably with a native PC/Mac setup.

I don't think any pro would be stupid enough to open the dedicated audio LAN to the Internet :wink:
In large setups I believe these AES67-type protocols actually give better reliability than plain audio cables: you can use multiple redundant paths and say bye-bye to bulky, expensive snakes.

Post

lorcan wrote:
Aleksey Vaneev wrote: SoundGrid will fail if some intermediate Internet hub goes down, and then there's a minimum of 5-10 ms of round-trip latency. It's also not a good idea to degrade API performance for 99% of users for the sake of the 1% of lucky millionaires.
They claim latency as low as 0.8 ms here: https://www.waves.com/soundgrid-101
and 1.8 ms interfaced with a Behringer X32: https://dbbaudio.com/2017/waves-x-wsg-latency/
Good luck achieving something like that reliably with a native PC/Mac setup.

I don't think any pro would be stupid enough to open the dedicated audio LAN to the Internet :wink:
In large setups I believe these AES67-type protocols actually give better reliability than plain audio cables: you can use multiple redundant paths and say bye-bye to bulky, expensive snakes.
OK, I was envisioning an "over the Internet" setup, hence my latency estimate. To me, running SoundGrid locally is strange: if you care about power-supply redundancy, why not build a native box with a redundant PSU and some modern multi-core CPU? The latency would be effectively zero. I guess there's no marketing hype in that. Plugin servers may be redundant, but the hubs and the host computer are not, so the risks are not reduced.

Post

Just an opinion: sample-accurate automation is actually also an "academic perfection" thing. I do not think I've ever seen a music project with automation curves at 1 ms precision or finer. From an overhead point of view, sample-accurate automation is pretty bad, as you have to splice processing from event to event, and in my practice I can hardly convert sample-accurate automation into continuous control signals, as that would generate even more overhead.

It would always have been a better idea to simply reduce the system-wide block size, or to automatically splice processing to the required event resolution in a given plugin. That would be far more efficient system-wide than forcing every plugin to cope with sample-accurate automation.
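A minimal sketch of the per-event splicing being described: the plugin chops each block into sub-chunks between parameter changes. The types here are hypothetical; the real VST3 API delivers changes through IParameterChanges queues rather than a flat vector:

```cpp
#include <vector>

// Hypothetical sample-accurate parameter change event.
struct ParamEvent { int sampleOffset; float value; };

// Apply a gain parameter sample-accurately: process audio up to each
// event's offset with the current value, then switch to the new value.
// `events` must be sorted by sampleOffset.
void processSpliced(const std::vector<ParamEvent>& events,
                    float* audio, int numSamples, float& gain) {
    int pos = 0;
    for (const ParamEvent& e : events) {
        for (; pos < e.sampleOffset && pos < numSamples; ++pos)
            audio[pos] *= gain;          // sub-chunk before the event
        gain = e.value;                  // parameter change takes effect here
    }
    for (; pos < numSamples; ++pos)      // tail after the last event
        audio[pos] *= gain;
}
```

The overhead complaint is visible in the structure: every event cuts the block into another sub-chunk, defeating whole-block optimizations the inner loop could otherwise use.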

Post

Aleksey Vaneev wrote:Just an opinion: sample-accurate automation is actually also an "academic perfection" thing. I do not think I've ever seen a music project with automation curves at 1 ms precision or finer. From an overhead point of view, sample-accurate automation is pretty bad, as you have to splice processing from event to event, and in my practice I can hardly convert sample-accurate automation into continuous control signals, as that would generate even more overhead.

It would always have been a better idea to simply reduce the system-wide block size, or to automatically splice processing to the required event resolution in a given plugin. That would be far more efficient system-wide than forcing every plugin to cope with sample-accurate automation.
It's up to you whether to use all automation points or just the last/an average. Best of both worlds, really.
And having a timestamp makes your project sound the same irrespective of the block size you're working at. It's quite common to change the block size for tracking/editing.

Post

lorcan wrote:
Aleksey Vaneev wrote:Just an opinion: sample-accurate automation is actually also an "academic perfection" thing. I do not think I've ever seen a music project with automation curves at 1 ms precision or finer. From an overhead point of view, sample-accurate automation is pretty bad, as you have to splice processing from event to event, and in my practice I can hardly convert sample-accurate automation into continuous control signals, as that would generate even more overhead.

It would always have been a better idea to simply reduce the system-wide block size, or to automatically splice processing to the required event resolution in a given plugin. That would be far more efficient system-wide than forcing every plugin to cope with sample-accurate automation.
It's up to you whether to use all automation points or just the last/an average. Best of both worlds, really.
And having a timestamp makes your project sound the same irrespective of the block size you're working at. It's quite common to change the block size for tracking/editing.
No, if you have to follow timestamps and expect different block sizes, you can't discard any automation points. That's why it could be better if the host did the splicing itself; plugins would not waste time on splicing logic. And if I recall correctly, some hosts had problems syncing VST3 automation with the process function's event data (Parameter::changed and ProcessData.inputParameterChanges).

Post

lorcan wrote:
Aleksey Vaneev wrote:Decoupling of UI and DSP was always a bad idea; it's good only from an academic-perfection point of view. Now we have to push megabytes of data between UI and DSP just to draw a spectrum-analysis view.
AAX and AU do it too; I'm not sure you could say Apple and Avid are academic perfectionists.
At least one of the more popular frameworks, namely IPlug (and I suspect this approach is also very popular among people who don't use frameworks), appears to handle this for AU (no idea about AAX, I haven't looked at that) by having the GUI query a custom property which is literally a pointer to the actual plugin object. Once you have the raw pointer, you basically give the middle finger to the whole separation nonsense and just do it the more efficient way.
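A rough sketch of that escape-hatch pattern: the GUI asks the component for a custom property whose payload is just the plugin object's address, then talks to it directly. The property ID, types and function here are entirely hypothetical, not the real AU or IPlug API:

```cpp
#include <cstdint>

// Hypothetical custom property ID ('inst' as a four-char code).
constexpr uint32_t kCustomProperty_PluginInstance = 0x696E7374;

// Stand-in for the actual plugin/DSP object.
struct PluginCore { float lastPeak = 0.f; };

// Component-side property handler: for the magic ID, the "property data"
// written out is simply the raw pointer to the plugin object.
bool getProperty(PluginCore* core, uint32_t propId, void* outData) {
    if (propId == kCustomProperty_PluginInstance) {
        *static_cast<PluginCore**>(outData) = core;  // hand out raw pointer
        return true;
    }
    return false;  // not a property we recognize
}
```

Once the GUI holds that pointer it can read analyser data or meters directly, bypassing the format's sanctioned messaging; the obvious caveat is that this only works when UI and DSP actually live in the same process and address space.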

IMHO the big problem with ALL the plugin interfaces is that they were designed by idiots who thought they were supposed to dictate the internal architecture of the plugin rather than just provide an actual interface. The fact is, that's just not going to happen, and the more you try to force it down developers' throats, the uglier the hacks you get as a result.
