aciddose wrote: Tail size is the length of the signal you generate after the input has gone to zero.
For example in an envelope generator, tail size would be the release time.
This is of major consequence in hosts that disable processing during silence. In these hosts, processing will not stop until both the input and the output have been zero for at least (latency + tail size) samples.
For example, take a delay effect. Both the input and the output might be zero, yet a signal may still be stored in the delay buffer, and until the total delay time has passed you can't be certain it won't make its way to the output.
Each time a signal is detected at either the input or the output, the silence counter is reset.
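The silence-suspend logic described in the quote above can be sketched roughly as follows. This is a hypothetical illustration, not actual host code; the class name `SilenceDetector` and its interface are made up for the example:

```cpp
#include <cstddef>

// Hypothetical sketch of a host's silence-suspend logic: processing may be
// suspended only after both input and output have been silent for at least
// (latency + tailSize) samples. Any non-zero sample resets the counter.
class SilenceDetector {
public:
    SilenceDetector(std::size_t latency, std::size_t tailSize)
        : threshold_(latency + tailSize), silentCount_(0) {}

    // Call once per processed sample (a real host would work per block).
    void feed(float inSample, float outSample) {
        if (inSample != 0.0f || outSample != 0.0f)
            silentCount_ = 0;          // signal detected: reset the counter
        else
            ++silentCount_;
    }

    bool canSuspend() const { return silentCount_ >= threshold_; }

private:
    std::size_t threshold_;
    std::size_t silentCount_;
};
```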
So, if we implement a “lookahead” buffer that delays the audio by N samples, then the correct tail size is exactly N. If the last non-silent sample is sent to the plug-in at time T, then another N (silent) samples need to be fed to the plug-in in order to "flush" the pending outputs: it takes N samples for the last non-silent sample to propagate through the FIFO buffer. Thus, the last non-silent sample is returned from the plug-in at time T+N. In other words: the plug-in's response will not have "died to zero" (as the spec says) until another N samples of silence have been processed as input, because until then we still have non-silent samples in the buffer. Thus, the tail size must be N.

MadBrain wrote: I think getTailSize() is really just to tell the approximate length of a reverb or delay time, so I'd be really surprised if it gave any sort of exact results. After all, real-time audio never really "ends", so in the normal context where you'd use a VST, your output is never really truncated, it just goes on "forever".
Still, this info seems kind of redundant, because the host application can deduce the same information from the Initial Delay alone.
I only see two possibilities here:
- InitialDelay(X) tells the host to discard the first X samples of the plug-in's output, but nothing else. In addition, TailSize(Y) tells the host to feed an additional Y samples of silence to the plug-in at the very end, in order to flush the pending outputs.
- InitialDelay(Z) tells the host to discard the first Z samples of the plug-in's output and to feed an additional Z samples of silence to the plug-in at the very end, in order to flush the pending outputs. TailSize is not needed or considered at all.
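The first interpretation could be sketched like this from the host's point of view. This is purely illustrative (the function `renderOffline` and its parameters are made up; `process` stands in for whatever per-sample processing call the plug-in exposes):

```cpp
#include <vector>
#include <cstddef>

// Hypothetical offline-render loop under the first interpretation:
// discard the first 'initialDelay' output samples (latency compensation),
// then feed 'tailSize' extra zeros at the end to flush pending outputs.
template <typename Process>
std::vector<float> renderOffline(Process process,
                                 const std::vector<float>& input,
                                 std::size_t initialDelay,
                                 std::size_t tailSize) {
    std::vector<float> out;
    std::size_t toDiscard = initialDelay;
    auto push = [&](float s) {
        if (toDiscard > 0) --toDiscard;  // drop latency-compensation samples
        else out.push_back(s);
    };
    for (float s : input) push(process(s));
    for (std::size_t i = 0; i < tailSize; ++i)  // flush with silence
        push(process(0.0f));
    return out;
}
```

Note that for a pure N-sample delay, where TailSize == InitialDelay == N, the rendered output has exactly the same length as the input, which is what an offline editor wants.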
Well, even if VST was originally designed primarily with real-time processing in mind, it apparently is the most widely supported plug-in type in wave editors and similar applications - at least on the Windows platform. Most applications of this kind support either VST or no plug-ins at all. Some have their own non-standard plug-in formats, yes, but having to write a separate plug-in for each application is rather unrewarding! Consequently, in practice we have to work with VST for this purpose, simply due to the lack of suitable alternatives. And with some "well-behaved" editors, like Acoustica, WaveLab, Audition, GoldWave and REAPER, this seems to work fine. If only there weren't those other editors...

MadBrain wrote: VSTs are really designed for real-time processing, not processing on fixed audio, so if I were you I'd look at other plugin formats.
Maybe I will have a look too. But if they are Audacity-only, their usefulness is limited. Do you know how difficult it would be to call native C++ code from "LADSPA" or "Nyquist" plug-ins? Note that my code is written in the form of a general-purpose C++ library; I have written bindings for SoX and VST...

MadBrain wrote: For instance, Audacity supports LADSPA plugins and Nyquist plugins (using some sort of LISP) and tbh I have no idea if these are better for non-realtime audio, but it might be a good idea to check.
Any more details or pointers on this?

MadBrain wrote: I think VST itself also has a specific mode for non-realtime audio but I don't think it's well supported at all.
I cannot find anything about it in the specs:
http://ygrabit.steinberg.de/~ygrabit/pu ... fectX.html
[EDIT] Maybe you are referring to the wantAsyncOperation() function? Is there an example of how to use it? [/EDIT]