Initializing plugins for "off line" vs "live" usage?
-
- KVRian
- Topic Starter
- 626 posts since 30 Aug, 2012
I'm confused on the approach to initializing plugins.
When used in a "live" application, like playback from a DAW, audio data (or zeroes) is usually sent to the plugin in a continuous stream even when playback is stopped - as if it was a piece of real electronic gear. The plugin has time to "settle out" before audio starts.
But when used "off line" (rendering) there is no continuous "idling" stream of data - the plugin needs to start producing valid outputs from the very first sample it receives.
So, when initializing things like FIR filters, FFT buffers, etc, how is this difference between off-line and live usage typically handled? If you simply initialize everything to zeroes you could get garbage at the beginning of the file when used in an "off line" situation. If you try to solve that problem by "reading ahead" to fill the buffers with real data you will incur latency in live use. It seems, in many cases, that a plugin requires two different modes of operation - live and off-line. How is this typically handled?
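To make the start-up problem concrete, here is a small illustrative sketch (not from any real plugin; the function name is invented): a 4-tap moving-average FIR whose delay line starts at zero. The first few outputs are "wrong" because they blend real input with the zero-filled past, which is exactly the garbage-at-the-start issue described above.

```python
# Illustrative sketch: a 4-tap moving-average FIR whose delay line starts
# at zero. The first len(taps)-1 outputs mix real input with the
# zero-filled "past", so they ramp up instead of being correct.

def fir_process(samples, taps, state=None):
    """Process samples through an FIR; state is the zero-initialised past."""
    if state is None:
        state = [0.0] * (len(taps) - 1)   # the assumed "infinite zero past"
    out = []
    for x in samples:
        window = [x] + state              # newest sample first
        out.append(sum(t * w for t, w in zip(taps, window)))
        state = window[:-1]               # shift the delay line
    return out

taps = [0.25, 0.25, 0.25, 0.25]           # simple moving average
print(fir_process([1.0, 1.0, 1.0, 1.0, 1.0], taps))
# The first three outputs ramp up (0.25, 0.5, 0.75) before settling at 1.0
```

In live use those first few samples are harmless because they happen while the plugin is idling on silence; offline, they land at the start of the rendered file.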
-
- KVRist
- 379 posts since 21 Nov, 2005
I can't answer from a VST perspective, as I'm not a VST developer - I've just started hanging around here for some reason! But I think the simple answer is that when used "offline", the actual sequence of values being passed in (representing PCM audio data) need be no different to the sequence of values that the plugin receives when it's being used "live"; its output, therefore, can also be exactly the same.
In both situations, the plugin is reading in chunks of that sequence as fast as it can, and processing them as fast as it can; it's just that in the "live" situation, the environment (e.g. VST host) will be feeding those chunks in at an interval that corresponds with the passing of real time, while if rendering offline, the environment will simply pass those chunks through as fast as possible. In between the processing of the chunks, the processor is doing nothing. So in both cases, the input is the same; the output is the same; the only difference is the length of the occasional breaks taken...
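The point above can be sketched in a few lines (names are invented for illustration): a host driver feeds the plugin's per-block callback one chunk at a time, and the output is identical no matter how the calls are paced or how the signal is chunked.

```python
# Sketch: the host feeds the plugin fixed-size blocks; whether those calls
# are paced in real time (live) or back-to-back (offline render) doesn't
# change the numbers that come out.

def process_block(block, gain=0.5):
    """Stand-in for a plugin's per-block callback (hypothetical example)."""
    return [gain * x for x in block]

def render(signal, block_size):
    """Drive process_block the way a host would, one chunk at a time."""
    out = []
    for i in range(0, len(signal), block_size):
        out.extend(process_block(signal[i:i + block_size]))
    return out

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
assert render(signal, 2) == render(signal, 3)  # chunking/pacing is irrelevant
```

This holds for any memoryless or properly state-carrying process; the caveat raised later in the thread is what happens at the edges of the signal, where the state has nothing real to carry.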
-
- KVRAF
- 2550 posts since 13 Mar, 2004
You can detect in your plugin if the host is in online (live) or offline rendering stage and then do stuff accordingly.
https://forum.cockos.com/showthread.php?t=46599
https://forum.cockos.com/showpost.php?p ... ostcount=5
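For a rough idea of what "doing stuff accordingly" might look like: plugin APIs expose a processing-mode hint from the host (in VST3, for instance, the process setup carries a realtime-vs-offline mode). The sketch below is purely illustrative Python with invented names, showing one way a plugin could branch on that hint.

```python
# Hypothetical sketch: the host tells the plugin whether it is rendering
# offline, and the plugin adjusts its start-up behaviour. Class and method
# names are invented for illustration, not from any real plugin API.

class LookaheadPlugin:
    def __init__(self, lookahead):
        self.lookahead = lookahead    # samples of future context required
        self.offline = False

    def set_offline(self, offline):
        """Called when the 'host' reports a processing-mode change."""
        self.offline = offline

    def start_of_clip_padding(self):
        # Offline: the host can pre-feed real samples, so no warm-up needed.
        # Live: the unseen past can only be padded with zeroes.
        return 0 if self.offline else self.lookahead

plug = LookaheadPlugin(lookahead=256)
plug.set_offline(True)
print(plug.start_of_clip_padding())   # 0 when rendering offline
```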
-
- KVRian
- Topic Starter
- 626 posts since 30 Aug, 2012
Topiness wrote: ↑Wed Jan 09, 2019 10:43 pm

I can't answer from a VST perspective, as I'm not a VST developer - I've just started hanging around here for some reason! But I think the simple answer is that when used "offline", the actual sequence of values being passed in (representing PCM audio data) need be no different to the sequence of values that the plugin receives when it's being used "live"; its output, therefore, can also be exactly the same.

In both situations, the plugin is reading in chunks of that sequence as fast as it can, and processing them as fast as it can; it's just that in the "live" situation, the environment (e.g. VST host) will be feeding those chunks in at an interval that corresponds with the passing of real time, while if rendering offline, the environment will simply pass those chunks through as fast as possible. In between the processing of the chunks, the processor is doing nothing. So in both cases, the input is the same; the output is the same; the only difference is the length of the occasional breaks taken...

That's correct. The difficulty comes in when working with things like linear phase filters that require large amounts of future values and past values. Those "chunks" being processed, in that case, can have elements that occur where time does not (like at the beginning or end of an audio clip). The question I had in this thread was how to deal with that. In most cases that "undefined time" is filled with zeroes and, AFAIK, that's about all you can do.
So, my point is that "time" is the difference between live processing vs. off-line processing. In live processing time is infinite in both directions (past and future). In off-line processing (i.e., of a file) time has a defined and limited range.
-
- KVRist
- 379 posts since 21 Nov, 2005
My experience with working with filters is limited (biquads, some simple FIRs and delays) but I'm confused as to why a filter would need past audio values? In every filter I've used, you set up the behaviour by setting various coefficients; the internal state of the filter is then initialised as if it's only ever seen zero input.
I have typically needed to input some extra zeroes at the end - 'future values', you could say - to 'push out' the tail of the filter response. In the case of an IIR this will never really end, so you have to just decide how long to go for....
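That tail-flushing trick can be sketched like this (illustrative names, not from a library): appending len(taps)-1 trailing zeroes lets the last real sample travel all the way through the delay line, so the full filter response appears in the output.

```python
# Sketch of 'pushing out' an FIR's tail with trailing zeroes: appending
# len(taps)-1 zeroes lets the last real sample fully traverse the delay
# line, so the complete impulse response shows up in the output.

def fir_with_tail(samples, taps):
    state = [0.0] * (len(taps) - 1)
    padded = list(samples) + [0.0] * (len(taps) - 1)  # 'future' zeroes
    out = []
    for x in padded:
        window = [x] + state
        out.append(sum(t * w for t, w in zip(taps, window)))
        state = window[:-1]
    return out

# A single impulse through a 3-tap filter: the padding lets the full
# impulse response [0.5, 0.3, 0.2] appear in the output.
print(fir_with_tail([1.0], [0.5, 0.3, 0.2]))
```

For an IIR there is no finite number of zeroes that fully drains the tail, so, as noted above, you just pick a cutoff.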
.... but to me that wouldn't be undefined time, so much as just a defined period of silence...?
I still don't see there's really a difference, or a concept of undefined time when it comes to a digital algorithm (which typically doesn't ever truly work in real time - even when working 'live' - but in discrete time on a sequence of values).
-
- KVRian
- Topic Starter
- 626 posts since 30 Aug, 2012
Nearly EVERY filter uses past values. In fact, IIR filters ONLY use past outputs and past input samples (plus the one current input) and, therefore, behave much like an actual electronic filter.
A linear phase FIR filter may require HUNDREDS, or even thousands, of past samples plus an equal number of future samples. When a filter like this starts up at the beginning of an audio file, half of the FIR array (the "past" half) is in a time that doesn't exist because it's before the file started. Same thing happens at the end of the file, when there are no samples to "look ahead" to fill the future half of the FIR array.
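The boundary situation described above can be sketched as follows (illustrative code, not from any plugin): a symmetric FIR of length N is centred on the current sample, so it wants (N-1)//2 past and (N-1)//2 future samples, and at the clip edges both halves get zero-padded.

```python
# Sketch: a linear-phase FIR of length N is centred on the current sample,
# so it needs (N-1)//2 past and (N-1)//2 future samples. At the clip
# boundaries those don't exist, so both ends are zero-padded.

def linear_phase_fir(samples, taps):
    half = (len(taps) - 1) // 2
    # Pad the non-existent 'past' and 'future' with zeroes.
    padded = [0.0] * half + list(samples) + [0.0] * half
    out = []
    for n in range(len(samples)):
        window = padded[n:n + len(taps)]
        out.append(sum(t * w for t, w in zip(taps, window)))
    return out

taps = [0.125, 0.25, 0.25, 0.25, 0.125]   # symmetric => linear phase
y = linear_phase_fir([1.0] * 8, taps)
# The first and last two outputs are attenuated because half the window
# falls in padded-zero territory.
print(y)
```

Note the offline version can look genuinely ahead into the file everywhere except the last half-window; live, that same half-window of lookahead has to be bought with reported latency.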
-
- KVRist
- 379 posts since 21 Nov, 2005
Sure, but what I thought you were talking about was the practical need to actually send those conceptual past values in to the filter. As I mentioned, with every filter I've used, it's trivial to initialise the internal state of the filter as if it's only ever seen zero input.
A simple IIR example would be this:
https://raw.githubusercontent.com/naudi ... dFilter.cs
When you set it up (e.g. using SetLowPassFilter() ), that will set up the coefficients, specifying the behaviour. But AFAICS the state is (trivially) set up as if the filter had only ever seen infinite zeroes as past input.
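Along the lines of the linked NAudio filter, here is a minimal direct-form-I biquad rewritten as an illustrative Python sketch (the class is invented for this example, not the actual NAudio code): the state starts at zero, as if the filter had only ever seen silence, so the very first output is simply b0 * x[0].

```python
# Minimal direct-form-I biquad, illustrating the point above: the internal
# state (x1, x2, y1, y2) starts at zero, i.e. as if the filter had only
# ever seen zero input, so the first output is just b0 * x[0].

class BiQuad:
    def __init__(self, b0, b1, b2, a1, a2):
        self.b0, self.b1, self.b2 = b0, b1, b2
        self.a1, self.a2 = a1, a2
        self.x1 = self.x2 = 0.0          # past inputs, zero-initialised
        self.y1 = self.y2 = 0.0          # past outputs, zero-initialised

    def process(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x2, self.x1 = self.x1, x    # shift input history
        self.y2, self.y1 = self.y1, y    # shift output history
        return y

bq = BiQuad(b0=0.5, b1=0.2, b2=0.1, a1=-0.3, a2=0.05)
print(bq.process(1.0))    # 0.5: only b0 contributes on the first sample
```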
-
- KVRian
- Topic Starter
- 626 posts since 30 Aug, 2012
Yes, filling the "past" with zeros is all you can do. The problem I envisioned with that is that the first several samples the plugin processes will not produce the correct output because it's only based on SOME audio data (the rest being zeros).
But, upon further thought, that's all you CAN do. I considered "reading ahead" and then delaying the filter output - but that's not right either since the "read ahead" data would also be in the wrong place with respect to the current sample.
You're right. Fill with zeros.