Specific technical questions about VST3, and info for users

DSP, Plugin and Host development discussion.

Post

mystran wrote:
Mark Vera wrote: So it's safe to say that you should think from Steinberg's perspective. Fixed-size buffer processing is the only officially supported way, as it was in VST2 (although many hosts and plugins support variable sizes).
Quoting VST 2.4 SDK (don't have the 2.3 available right now, but I believe it might have been even clearer there):

It's the same in VST3, just in slightly different places. However, their FAQ also states:

Code:

Q: Can the sample block size change while the plug-in is processing?

No. When the host changes the processing block size, the plug-in is informed by a call to AudioEffect::setupProcessing. This call is always preceded by a call to Component::setActive (false) and followed by a call to Component::setActive (true). So the block size can change during the runtime of a plug-in, but never during or between processing calls.
If you think in Steinberg's terms, the block size is the same as the number of samples per process call. As far as I've checked, Cubase does it this way. If you interpret this from Steinberg's perspective, variable buffer sizes are "not" supported. Technically there's no difference from VST2. The block size is also the same as the ASIO buffer size, since Cubase only works with ASIO (last time I checked there was only the ASIO/DX adapter driver).
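
To make that call order concrete, here is a minimal sketch of what a host might issue when the block size changes (the interface and field names are from the VST3 SDK; the surrounding host function and the 'newBlockSize' parameter are hypothetical):

Code:

#include "pluginterfaces/vst/ivstaudioprocessor.h"  // IAudioProcessor, ProcessSetup
#include "pluginterfaces/vst/ivstcomponent.h"       // IComponent

using namespace Steinberg;
using namespace Steinberg::Vst;

// Hypothetical host-side helper; 'component' and 'processor' are two
// interfaces of the same plug-in, 'newBlockSize' is the new fixed size.
void changeBlockSize (IComponent* component, IAudioProcessor* processor,
                      int32 newBlockSize, double sampleRate)
{
    component->setActive (false);           // 1. processing is stopped first

    ProcessSetup setup = {};
    setup.processMode        = kRealtime;
    setup.symbolicSampleSize = kSample32;
    setup.sampleRate         = sampleRate;
    setup.maxSamplesPerBlock = newBlockSize;
    processor->setupProcessing (setup);     // 2. announce the new block size

    component->setActive (true);            // 3. resume; process() calls follow
}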

I'm glad there's only one process now. No separate process and processReplacing. Some VST2.3/VST2.4 plugins just don't know what the difference between those is...
jouni - www.markvera.net - Stardrive Studio - Orionology

Post

You think enum { True, False } is bad?

They forgot a field!
Apparently, there are some legal, mostly non-controversial subjects you can't talk about on KVR, or even mention in your sig. Unfortunately, the guidelines don't actually talk about them, so the only way to find out is trial and error.

Post

Chris Walton wrote: Anyway, my question. Who had the absolutely wonderful idea (and I mean this in the most sarcastic way possible) of doing this:

Code:

enum
{
	// ...
	kResultOk = 0x00000000L,
	kResultTrue = kResultOk,
	kResultFalse = 0x00000001L,
	// ... 
};
Chris Walton wrote: I'm sorry, but what the f**k!? This goes against EVERY programming convention regarding true/false EVER. False is 0, true is 1/-1/anything else. I cannot think of anything else ever that broke this convention, and Steinborg should take a good long hard look at the idiot in their ranks that thought this would be a good idea.

Oh goody, I can't write if(blah) or if(!blah) anymore :roll: I can just imagine the billions of bugs developers will have tracking down this shit :roll:
I can answer that one - they've been reading Roedy Green's most excellent essay, in particular this chapter..

;)

DSP

Post

jwatte wrote:You think enum { True, False } is bad?
Since there's already a set of predefined constants which looks like enum bool { false, true /* is anything else */ } and since it breaks all C/C++ conventions... yes.

I've already worked with languages that define a word with all bits set as "false" and with all bits clear as "true", and the corresponding constructs "if boolean_variable" and "if not boolean_variable" worked just great. But - if you create an SDK that's specifically targeted at a C++ audience, you should at least define the basic constants in a C++-compatible way.

In this respect, the VST SDK 1/2 return code convention of returning -1 for "no", 0 for "don't know" and 1 for "yes" always was about as idiosyncratic as the definition of 0==true.
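
Either way, the practical upshot for plug-in code is that a tresult can't be treated as a boolean; it has to be compared against the named constants. A minimal sketch, assuming the standard funknown.h definitions (doSomething is a hypothetical stand-in for any SDK call returning a tresult):

Code:

#include "pluginterfaces/base/funknown.h"  // kResultOk, kResultTrue, kResultFalse, tresult

using namespace Steinberg;

// Hypothetical helper standing in for any SDK call that returns a tresult.
tresult doSomething ();

void caller ()
{
    tresult result = doSomething ();

    // Wrong: kResultOk/kResultTrue are 0, so success evaluates as "false".
    // if (result) { /* would run on failure, not success */ }

    // Correct: compare explicitly against the named constants.
    if (result == kResultOk)
    {
        // success
    }
}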
"Until you spread your wings, you'll have no idea how far you can walk." Image

Post

Thanks for clearing up those ABI things... However, bearing in mind that libstdc++ is typically statically linked in a plug-in, doesn't that mean compatibility will be broken if the plug-in uses v6 and the host uses v7, or vice versa?
Where's the problem with variable buffer sizes? It's not like it's hard or anything
Depends what you're writing. If you're building DSP accelerator hardware (which might not be a big concern for the devs on KVR, but it's a big part of the market overall and very important for users), variable buffer sizes are an almighty pain in the ass. Basically what you end up having to do is buffer very conservatively (meaning much more latency) so as to feed your device a fixed buffer size. Seeing as pretty much any audio engine is ultimately running off a regular DMA interrupt from the audio hardware (yes, I know there's a crapload of layers in between on a modern OS), fixed buffer sizes all the way is the sensible way to go IMHO. The FL guys only did it differently to fix broken behaviour in specific plugins and in VST itself.
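
For what it's worth, the adaptation I mean looks roughly like this: collect whatever the host delivers and only hand the device complete fixed-size blocks (purely illustrative names, mono for brevity):

Code:

#include <cstddef>
#include <vector>

// Minimal sketch of adapting variable host blocks to a fixed device block size.
// The samples left sitting in the FIFO between host calls are exactly where
// the extra latency comes from.
class FixedBlockAdapter
{
public:
    explicit FixedBlockAdapter (std::size_t deviceBlockSize)
        : blockSize (deviceBlockSize) {}

    // Called with the host's (possibly variable-size) buffer.
    template <typename ProcessFn>
    void push (const float* in, std::size_t numSamples, ProcessFn processFixedBlock)
    {
        fifo.insert (fifo.end (), in, in + numSamples);
        while (fifo.size () >= blockSize)
        {
            processFixedBlock (fifo.data (), blockSize);   // always blockSize samples
            fifo.erase (fifo.begin (), fifo.begin () + blockSize);
        }
    }

private:
    std::size_t blockSize;
    std::vector<float> fifo;
};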

mystran - the time info thing is a big deal when you are dealing with a multi-threaded, delay-compensated host audio engine. A lot of hosts' VSTTimeInfo sync information completely breaks as soon as you switch on PDC and start adding a few UAD plugs or whatever. For tempo-synced LFOs it's no big deal; for plugins with onboard sequencing, it is a very big deal!

Rock Hardbuns - ASIO does guarantee constant buffer size, some other driver architectures do not.
This account is dormant, I am no longer employed by FXpansion / ROLI.

Find me on LinkedIn or elsewhere if you need to get in touch.

Post

Angus_FX wrote:fixed buffer sizes all the way is the sensible way to go IMHO.
Not only that, but buffer sizes should be a power of 2 as well. That is what audio drivers usually implement, AFAIK. Not that it matters much, but at least FFT-based convolution could benefit from it, to the user's pleasure of having zero-latency convolution (there are other problems with convolution, though - namely CPU spiking - but that's a different talk).

Post

Aleksey Vaneev wrote:
Angus_FX wrote:fixed buffer sizes all the way is the sensible way to go IMHO.
Not only that, but buffer sizes should be a power of 2 as well. That is what audio drivers usually implement, AFAIK. Not that it matters much, but at least FFT-based convolution could benefit from it, to the user's pleasure of having zero-latency convolution (there are other problems with convolution, though - namely CPU spiking - but that's a different talk).
No, no, no and NO!

Fixed buffer sizes are totally NOT the way to go.

The very notion of an audio system having a fixed buffer size (and especially power-of-two) is deeply rooted in the days of PCI soundcards.

These days, with USB and FireWire soundcards taking over, this is no longer the case. Adapting between the non-constant buffer size on these external buses and fixed-size buffers that most audio systems require (CoreAudio, WDM, ASIO) is an unnecessary source of latency, and as a developer of sound card drivers I would love to see this outdated assumption disappear :-)

Most plug-ins don't have a requirement to operate on fixed size buffers, so why make all of them less flexible?

Come on guys, don't forget about the whole audio eco-system that plug-ins live in!

Post

Hi Nixim,

Not sure if this is the right place for it, but I'd be interested in some more tech details as to why the buffer size might be non-constant over FW/USB? I mean, you are ultimately outputting audio at a fixed rate (aside from any sample-clock jitter), and once back on the host machine, you want to be signalling on the interface between the kernel and user-mode driver components at a regular rate?

That said, I don't know all that much about the technical details of what happens between the user-mode part of the driver (i.e. the ASIO callback) and the AD/DA on the hardware, so maybe you could enlighten me a bit?

Thanks,

Angus.
This account is dormant, I am no longer employed by FXpansion / ROLI.

Find me on LinkedIn or elsewhere if you need to get in touch.

Post

nixim wrote:Come on guys, don't forget about the whole audio eco-system that plug-ins live in!
I fully understand this system, but I also understand that it's built around a binary numbering system. That's why the 'int' type's size increases by a factor of 2, not by a factor of 3 or any other factor.

Of course, flexibility is always tempting, but we should not forget that the more flexible a system is, the less control we have over it (a general truth worth remembering, especially when thinking about an API).

If you can show that some ASIO audio driver absolutely requires block sizes like 48 samples to achieve the lowest real-time latency, that will be a different talk.

Post

nixim wrote: These days, with USB and FireWire soundcards taking over, this is no longer the case. Adapting between the non-constant buffer size on these external buses and fixed-size buffers that most audio systems require (CoreAudio, WDM, ASIO) is an unnecessary source of latency, and as a developer of sound card drivers I would love to see this outdated assumption disappear :-)
But USB and FireWire soundcards by themselves add latency to the equation so surely anyone wanting a low-latency system should go for a PCI card in the first place. (And I don't believe that PCI soundcards are anywhere near obsolete).

*scratches head*

Zoom

Post

Well, I don't particularly care for powers of two or anything, but buffers that are nicely aligned on, say, a 64-byte boundary (if we ever see 8 x 64-bit SIMD or anything) and have a length that's guaranteed to be a multiple of 64 bytes, that would be something. Or, if we cater for today's needs, 16-byte alignment and a length that's a multiple of 4 samples in SP or 2 samples in DP...

;) Urs

Post

ZooMatik wrote:But USB and FireWire soundcards by themselves add latency to the equation so surely anyone wanting a low-latency system should go for a PCI card in the first place. (And I don't believe that PCI soundcards are anywhere near obsolete).
As long as people are using only desktop computers, PCI is still a better solution cost- and latency-wise. I only have a laptop, for example, so I really can't use PCI, and I do value reasonably low latency.
never stop loving music.

Post

Urs wrote: Well, I don't particularly care for powers of two or anything, but buffers that are nicely aligned on, say, a 64-byte boundary (if we ever see 8 x 64-bit SIMD or anything) and have a length that's guaranteed to be a multiple of 64 bytes, that would be something. Or, if we cater for today's needs, 16-byte alignment and a length that's a multiple of 4 samples in SP or 2 samples in DP...

;) Urs
I agree, we should look into the future as much as we can. While I do not think a 64-byte boundary will ever be necessary (I can envision future processors' secondary cores being fiber-guided, with limited local memory and a reduced instruction set), 16-byte alignment for buffers is a must today. So, if we choose free-size buffers in the end, 16-byte alignment should still be considered - it's an easy thing for any host to do.
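
To illustrate why that guarantee helps, here's a minimal SSE sketch that assumes the host provides 16-byte-aligned buffers whose length is a multiple of 4 single-precision samples (illustrative code, not from any SDK):

Code:

#include <xmmintrin.h>   // SSE intrinsics

// Apply a gain to a buffer, assuming 16-byte alignment and a length that is
// a multiple of 4 - the guarantees discussed above. With them, the aligned
// loads/stores below are safe and no scalar tail loop is needed.
void applyGain (float* buffer, int numSamples, float gain)
{
    const __m128 g = _mm_set1_ps (gain);
    for (int i = 0; i < numSamples; i += 4)
    {
        __m128 x = _mm_load_ps (buffer + i);              // aligned load
        _mm_store_ps (buffer + i, _mm_mul_ps (x, g));     // aligned store
    }
}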

Post

z15 wrote: As long as people are using only desktop computers, PCI is still a better solution cost- and latency-wise. I only have a laptop, for example, so I really can't use PCI, and I do value reasonably low latency.
Point well taken. I forgot about laptops :roll:

Zoom

Post

Angus_FX wrote: Not sure if this is the right place for it, but I'd be interested in some more tech details as to why the buffer size might be non-constant over FW/USB?
I'll try and give a quick overview of the main difference between PCI and USB/FireWire -

A PCI soundcard has fixed-size buffers for the input and output streams.
Once the soundcard has filled the buffer (or needs the buffer filled) it fires off an interrupt so the driver can do its work.
So with these devices the buffer size is fixed, and the time between interrupts changes with sample rate.
This is the 'world view' that ASIO was built around.

With USB/FireWire, data transfers occur periodically (e.g. every 1 ms for USB), and this minimum time interval is set in stone by the design.
So the audio driver can handle an interrupt (although in this case technically not a hardware interrupt) at millisecond intervals (in the case of USB), and the sample rate therefore determines the size of the buffers.
For example, at 48 kHz a 1 ms buffer is 48 samples; at 44.1 kHz it gets a bit more complicated - 9 x 44 samples + 1 x 45 samples over each 10 ms.
And once you sync your soundcard to an external clock source, you're probably no longer running at a true 44.1 kHz with respect to the computer's clock.
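
As a quick illustration of that 44.1 kHz arithmetic, a driver can keep the average rate exact with a simple fractional accumulator (illustrative only; a real driver derives the count from clock recovery or the feedback endpoint rather than a fixed nominal rate):

Code:

#include <cstdio>

int main ()
{
    // 44.1 kHz = 441 samples per ten 1 ms intervals; track the leftover
    // tenths of a sample so the long-term average stays exact.
    const int tenthsPerInterval = 441;   // 44.1 samples, expressed in tenths
    int accumulator = 0;

    for (int ms = 0; ms < 10; ++ms)
    {
        accumulator += tenthsPerInterval;
        int samplesThisInterval = accumulator / 10;   // 44 or 45
        accumulator -= samplesThisInterval * 10;
        std::printf ("interval %d: %d samples\n", ms, samplesThisInterval);
    }
    // Output: nine intervals of 44 samples and one of 45, i.e. 441 samples = 44.1 kHz.
    return 0;
}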

I agree that for ultimate low-latency and low-cpu overhead, PCI is the best way to go.
But then of course, sticking sensitive analog circuitry inside an electrically noisy computer case is never going to give the best noise performance! ;-)
