General method for smooth parameter changes

camsr - KVRAF - 7294 posts since 17 Feb, 2005

mystran wrote: ↑Mon Mar 20, 2023 1:01 am
One possible engineering design might be to (1) smooth GUI/automation events using linear interpolation, (2) generate modulation at some slightly reduced rate like 1/16 samplerate, (3) interpolate the modulations to audio rate (eg. linear interpolation), perhaps smooth with a one-pole just a little bit (round those corners), (4) do the filter coefficient computations at this rate, (5) interpolate the coefficients to the oversampled filter computation rate.

Jeff McClintock wrote: ↑Mon Mar 20, 2023 12:28 pm
The question is: recalculating an SVF cutoff is basically a simple table lookup; is it possible that these steps of computation are less efficient than just computing the cutoff at audio-rate? (To be fair, you would still need to do step 1.)

If an approximation function helps lower the CPU use, there is also the precision of the result to consider.
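For illustration, one way to realize the reduced-rate idea quoted above might look like this (a minimal sketch under my own simplifying assumptions: a single cutoff parameter, coefficients recomputed at fs/16 and linearly ramped to audio rate, no oversampling stage; all names are made up):

Code: Select all
#include <cmath>
#include <cstdio>

// Rough sketch: expensive tuning math runs only at the control rate (fs/16),
// and the resulting coefficient is linearly interpolated up to audio rate.
struct SmoothedCutoff {
    static constexpr int kCtrl = 16;  // coefficient update every 16 samples
    float fs = 48000.0f;
    float target = 1000.0f;  // smoothed GUI/automation target, in Hz
    float g = 0.0f;          // current tuning coefficient g = tan(pi*f/fs)
    float dg = 0.0f;         // per-sample increment of the coefficient ramp
    int count = 0;

    float tick() {
        if (count-- <= 0) {  // control-rate update
            float gNext = std::tan(3.14159265f * target / fs);
            dg = (gNext - g) / kCtrl;  // ramp toward the new coefficient
            count = kCtrl - 1;
        }
        return g += dg;  // cheap per-sample linear interpolation
    }
};

int main() {
    SmoothedCutoff c;
    c.g = std::tan(3.14159265f * c.target / c.fs);  // start settled
    c.target = 5000.0f;  // parameter jump; the coefficient ramps smoothly
    for (int i = 0; i < 48; ++i) c.tick();
    std::printf("g after 48 samples: %f\n", c.g);
}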
juha_p - KVRian - 802 posts since 21 Feb, 2006 from FI
Could using a separate thread to calculate the new coefficients take enough time to smooth the parameter change, if you leak (drop) the parameter changes that arrive while the thread is still busy? After the thread finishes, you read the latest parameter values and hand them to a new coefficient-calculation thread (a rough sketch follows below). Of course, an additional delay could be used if the thread finishes too fast.
I used this type of technique in a file-updating process to get rid of terrible issues in a command-file-based EQ system (EqualizerAPO), which watches a system directory/file change flag to trigger re-reading of its EQ configuration file(s). Since you need to update/save the configuration file to get the flag turned on, the file basically has to be saved after every parameter change; to get a real-time feel while changing a parameter, the file would otherwise be saved and re-read tens, hundreds or even thousands of times before the value settles at its final position.
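The leak-the-updates idea might be sketched like this (a rough illustration, not juha_p's or EqualizerAPO's actual code; the 1 ms poll interval and all names are arbitrary choices):

Code: Select all
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// The GUI simply overwrites the newest value, so changes that arrive while
// the worker is busy are "leaked" (skipped); the worker only ever computes
// coefficients from the latest value it sees.
std::atomic<float> latestCutoff{1000.0f};  // written by GUI/automation
std::atomic<bool>  running{true};

void coefficientWorker() {
    float lastSeen = -1.0f;
    while (running.load()) {
        float v = latestCutoff.load();
        if (v != lastSeen) {
            lastSeen = v;
            // ...recompute and publish filter coefficients from v here...
            std::printf("recomputing for %.1f Hz\n", v);
        }
        // an extra delay could go here if the computation finishes too fast
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main() {
    std::thread worker(coefficientWorker);
    for (int f = 1000; f <= 2000; f += 10)  // a rapid parameter drag
        latestCutoff.store(float(f));       // most of these get leaked
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    running.store(false);
    worker.join();
}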
Last edited by juha_p on Fri Mar 31, 2023 12:49 pm, edited 1 time in total.
Jeff McClintock - KVRist - 405 posts since 30 Jan, 2005 from New Zealand
Yes, some companies do it this way. The tradeoff is that since you can't be sure exactly how long it will take until the new coefficients are ready, you lose sample-accurate automation.
mystran - KVRAF - 7410 posts since 12 Feb, 2006 from Helsinki, Finland
Jeff McClintock wrote: ↑Mon Mar 20, 2023 12:28 pm
The question is: recalculating an SVF cutoff is basically a simple table lookup; is it possible that these steps of computation are less efficient than just computing the cutoff at audio-rate?

Actually I wouldn't do a table lookup. The tuning of trapezoidal filters is a matter of tan(pi*f/fs), and there are efficient approximations for tan() ... like the one from 2DaT that's basically perfect and, when done in SIMD, takes something like less than 2 cycles per sample on average (or whatever; the point is that it's not worth spending cache on a LUT).

...but if you need to do a funkier design, for example you're doing some MZTi fit or perhaps even running least-squares for a FIR corrector, then doing it per-sample is a no-go and interpolating coefficients becomes a thing. Would you do this in a synth? Probably not, but you could reasonably do it in an EQ and still have the thing automate properly.
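2DaT's actual approximation is not reproduced here, but as a sketch of the general approach, even a plain [3/2] Pade approximant of tan() is already quite usable for SVF tuning (the usable range given below is my own rough estimate):

Code: Select all
#include <cmath>
#include <cstdio>

// g = tan(pi*f/fs) via a [3/2] Pade approximant of tan(x). Matches the
// Taylor series of tan(x) through the x^5 term; fine for cutoffs well
// below Nyquist, and easy to vectorize since it is just a few multiplies
// and one division.
inline float svfTuning(float f, float fs) {
    float x  = 3.14159265f * (f / fs);
    float x2 = x * x;
    return x * (15.0f - x2) / (15.0f - 6.0f * x2);
}

int main() {
    const float fs = 48000.0f;
    const float freqs[] = {100.0f, 1000.0f, 8000.0f};
    for (float f : freqs)
        std::printf("f=%6.0f  approx=%.6f  exact=%.6f\n",
                    f, svfTuning(f, fs), std::tan(3.14159265f * f / fs));
}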
Seeking asylum in any country willing to acknowledge my right to exist.
aciddose - KVRAF - 12546 posts since 7 Dec, 2004
I only use a table lookup because it eliminates clamp(tan(clamp(exp(x)))) and clamp(sin(clamp(exp(x)))) and is integer-in, integer-out. If it were just tan or sin, I could easily approximate it. Even if it were float-to-float, it may be more efficient to compute directly in cases where not so many operations are lumped together. I use the same (similar) technique for transposition, where the octave (2^N) is extracted and a very short table is used for the fractional component. This does use some cache, but the table is a global, and when processing 100s of voices it should remain cached. The tables are also very short (~256 or ~64 entries), as they only need to be "precise" at exact sample points and linear interpolation is sufficient between those points. For example, you might use a size of 12*N to get precision to 1/N semitones.
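A sketch of how the transposition trick might look (my reconstruction, not aciddose's actual code; the 64-entry table is just one of the sizes mentioned above):

Code: Select all
#include <cmath>
#include <cstdio>

// Split a pitch into integer octaves (an exact power-of-two scale) plus a
// fractional part looked up in a short, linearly interpolated table.
constexpr int kTableSize = 64;
float gOctaveTable[kTableSize + 1];  // 2^(i/kTableSize) over one octave

void initOctaveTable() {
    for (int i = 0; i <= kTableSize; ++i)
        gOctaveTable[i] = std::exp2(float(i) / kTableSize);
}

// pitch in semitones relative to some reference; returns 2^(pitch/12)
float pitchToRatio(float pitch) {
    float oct = pitch * (1.0f / 12.0f);
    float fl  = std::floor(oct);
    float pos = (oct - fl) * kTableSize;  // position within the table
    int   idx = int(pos);
    float t   = pos - idx;                // linear interpolation weight
    float m   = gOctaveTable[idx] + t * (gOctaveTable[idx + 1] - gOctaveTable[idx]);
    return std::ldexp(m, int(fl));        // exact power-of-two octave scale
}

int main() {
    initOctaveTable();
    std::printf("+7 semitones: %.5f (exact %.5f)\n",
                pitchToRatio(7.0f), std::exp2(7.0f / 12.0f));
}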
It's of course very important to avoid premature optimization in any program. Always set up an unoptimized test case where the full program is compiled with the alternatives as switchable templated subroutines. Compare different optimization strategies directly in the normal application, not under fixed test conditions that may never occur in real use.
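A sketch of what "switchable templated subroutines" could mean in practice (my illustration, not aciddose's actual code; the strategy names are made up):

Code: Select all
#include <cmath>
#include <cstdio>

// The tuning strategy is a template parameter, so the full application can
// be compiled both ways and the two builds compared under real-world use.
struct DirectTan {
    static float g(float f, float fs) { return std::tan(3.14159265f * f / fs); }
};

struct TableTan {
    static float g(float f, float fs) {
        // stand-in for a real interpolated LUT like the one sketched above
        return std::tan(3.14159265f * f / fs);
    }
};

template <typename Tuning>
float renderSample(float f, float fs) {
    return Tuning::g(f, fs);  // ...the rest of the voice/filter code here...
}

int main() {
    std::printf("direct: %f  table: %f\n",
                renderSample<DirectTan>(1000.0f, 48000.0f),
                renderSample<TableTan>(1000.0f, 48000.0f));
}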
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.
rafa1981 - KVRian - 868 posts since 4 Jan, 2007
The thing is that spawning threads might work when only one or a few plugin instances are doing it. It can happen that the maximum number of instances before crackles/clicks appear goes down, even while less CPU consumption is being shown. The CLAP thread-pool results support/show this.
Same applies, to a lesser extent, to lookup tables.
As I see it, for audio the important thing is not having low CPU usage, but having as few causes of performance/timing spikes as practically possible.
Then of course, what gains happier customers, I don't know...