What are the most important parts of C++ for coding plug-ins?

DSP, Plugin and Host development discussion.

Post

Any plug-ins that fall under "Awesome modern C++"?

https://github.com/rigtorp/awesome-modern-cpp

Post

"My tip is very simple: Learn it by doing."

This is the way

Post

I got ChatGPT to code a 2 oscillator 1 filter, 1 LFO, & Chorus VST synth in C++. It didn't put any values in for the internals, but it was a great bare bones structure to learn from.

Post

osiris wrote: Sun Feb 05, 2023 4:48 pm I got ChatGPT to code a 2 oscillator 1 filter, 1 LFO, & Chorus VST synth in C++. It didn't put any values in for the internals, but it was a great bare bones structure to learn from.
I'd be interested to know what you told chatgpt to get this

also what do you mean it didnt put any values in for the internals? like an lfo with no frequency value?

Post

Be aware of ‘realtime safe’ code in realtime threads. Look up lock-free programming.

Avoid locks, malloc, logging, anything of that sort in a realtime thread. Use atomics wherever appropriate. The principles are the same across the programming languages, the execution and syntax is different.
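The atomics advice can be sketched like this — a minimal, hypothetical parameter shared between a GUI thread and an audio thread (the names are made up for illustration):

```cpp
#include <atomic>

// Hypothetical filter-cutoff parameter shared between GUI and audio
// threads. The GUI thread stores a new value; the audio thread loads
// it once per block. No locks, no allocation on the audio side.
struct CutoffParam {
    // C++17 check: std::atomic<float> should not fall back to a lock.
    static_assert(std::atomic<float>::is_always_lock_free);

    std::atomic<float> value{1000.0f};

    void setFromGui(float hz) {            // called from the GUI thread
        value.store(hz, std::memory_order_relaxed);
    }
    float readInAudioThread() const {      // realtime safe: a plain load
        return value.load(std::memory_order_relaxed);
    }
};
```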

C++ specific: look up RAII, and be aware of the rule of zero and the rule of five. Get comfortable with templates and with debugging using gdb, and run tools like ASan and TSan. Stick with modern C++ (avoid plain C++11); go for C++17 at the oldest.
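A minimal sketch of RAII and the rule of zero — a hypothetical delay line whose only resource is a `std::vector`, so it needs no custom destructor or copy/move operations at all:

```cpp
#include <cstddef>
#include <vector>

// Rule of zero: the std::vector member manages its own memory (RAII),
// so this class declares no destructor and no copy/move operations.
class DelayLine {
    std::vector<float> buffer;   // freed automatically on destruction
    std::size_t writePos = 0;
public:
    explicit DelayLine(std::size_t samples) : buffer(samples, 0.0f) {}

    void write(float x) {
        buffer[writePos] = x;
        writePos = (writePos + 1) % buffer.size();
    }
    // Read the sample written `delay` steps ago (1 = most recent).
    float read(std::size_t delay) const {
        return buffer[(writePos + buffer.size() - delay) % buffer.size()];
    }
};
```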

Post

keyman_sam wrote: Thu Sep 21, 2023 3:37 am Avoid locks, malloc, logging, anything of that sort in a realtime thread. Use atomics wherever appropriate. The principles are the same across the programming languages, the execution and syntax is different.
Locks are fine as long as you never wait on a lock that might be held by a lower priority thread. Two audio threads using locks to synchronize each other is fine if there's no sensible wait-free alternative. An audio thread waiting for a lock that could be held by a GUI thread, on the other hand, is a priority inversion and not realtime safe (though sometimes you can get away with it using trywait() primitives in the audio thread, if you don't need the lock acquisition to always succeed and can just move along with your audio processing when it doesn't).
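The try-lock idea above might be sketched roughly like this (hypothetical names; the point is that the audio thread never blocks — it simply skips the update when the GUI thread happens to hold the lock):

```cpp
#include <mutex>

// Hypothetical shared state: the GUI writes pendingCutoff while
// holding presetMutex; the audio thread owns activeCutoff.
std::mutex presetMutex;
float pendingCutoff = 1000.0f;
float activeCutoff  = 1000.0f;

void audioBlock() {
    // try_lock returns immediately instead of waiting, so no
    // priority inversion: if the GUI holds the lock we just reuse
    // last block's value and try again on the next block.
    if (presetMutex.try_lock()) {
        activeCutoff = pendingCutoff;
        presetMutex.unlock();
    }
    // ... process audio using activeCutoff ...
}
```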

Post

j wazza wrote: Wed Sep 20, 2023 10:22 pm
osiris wrote: Sun Feb 05, 2023 4:48 pm I got ChatGPT to code a 2 oscillator 1 filter, 1 LFO, & Chorus VST synth in C++. It didn't put any values in for the internals, but it was a great bare bones structure to learn from.
I'd be interested to know what you told chatgpt to get this

also what do you mean it didnt put any values in for the internals? like an lfo with no frequency value?
I think what it's trying to do is teach me to code. It says it wants the Steinberg SDK, which I don't have, so I don't know if it would make any difference. I just asked it to code a basic one-oscillator multi-wave synth: MIDI in, pitch and unison, envelopes for amp and filter, and an LFO. It did the basic frame. The envelopes were all done but with no values for attack, decay, etc.

Post

mystran wrote: Thu Sep 21, 2023 10:11 am Locks are fine as long as you never wait on a lock that might be held by a lower priority thread. Two audio threads using locks to synchronize each other is fine if there's no sensible wait-free alternative.
This is interesting. I've watched several videos on realtime-safe coding and none of them mentioned this. Do you have an example of why you would want multiple audio threads? Also, how do you ensure they're the same priority?

Post

rou58 wrote: Fri Apr 26, 2024 2:44 pm
mystran wrote: Thu Sep 21, 2023 10:11 am Locks are fine as long as you never wait on a lock that might be held by a lower priority thread. Two audio threads using locks to synchronize each other is fine if there's no sensible wait-free alternative.
This is interesting. I've watched several videos on realtime-safe coding and none of them mentioned this. Do you have an example of why you would want multiple audio threads? Also, how do you ensure they're the same priority?
Watching videos is not a very good way to learn anything except absolute basics... or perhaps get an overview of what to actually study... but even then it's like the most inefficient possible way to spend your time.

For the most part, the question of what is or is not "real-time safe" becomes completely obvious once one truly understands what real-time actually means in the context of computing: a real-time program is one where missing a deadline is a failure. Sometimes the term "hard RT" is used for situations where missing a deadline is equivalent to a total system failure (think of some safety-critical system), whereas something might be called "soft RT" if missing a deadline is bad, but not the end of the world. It's a spectrum though, and audio is actually "pretty hard RT" in the sense that while missing a deadline won't kill anyone, there will still be an obvious glitch (whereas, say, video might be able to get away with a few frame drops) — yet still "soft" in the sense that we obviously don't usually prove that our code is guaranteed to meet a deadline on a certain CPU (though you could perhaps do this if you were making a hardware DSP product).

The fundamentally important part to understand is that "real-time" as a concept is concerned with deadlines and deadlines only. This means that when you're looking at real-time code, you need to evaluate performance differently from how you would evaluate general-purpose code: in "normal" code you are typically concerned with average efficiency, but in real-time code you must meet every deadline separately, so you are concerned with worst-case efficiency first, and average efficiency only second. This is fundamental: if you are not thinking about the worst case, you are not doing real-time programming.

Now, if we have a thread that's working towards a real-time deadline and another thread (say a UI thread) that isn't, and the UI thread is holding a lock that we try to grab from the audio thread, then what we have is known as a "priority inversion": the realtime thread is now waiting for a non-realtime task to make progress so that it can free the lock. In a strict real-time system we can actually construct scenarios where this sort of situation leads to a deadlock, and even if that's not really a concern on desktop systems, we still risk missing a deadline due to something that has nothing to do with any real-time processing.

That gives us the two rules that allow us to figure out what is or is not realtime safe: does the operation have an acceptable worst-case performance and does it guarantee that we'll never need to wait for a non-realtime thread? Once you understand these two ideas, you don't need to watch any videos about realtime safety anymore.

So finally, why multiple threads? Well, we have a bunch of stuff to process and we need it processed before a deadline expires. Modern CPUs have a bunch of cores, so if we can put work on all of them, we can expect to process more stuff without missing deadlines than we could on a single core. Now, basically all modern DAWs already try to process multiple plugins in parallel, so for most plugins you don't really need to do multi-threading directly, but sometimes something like a particularly expensive synth can bottleneck the whole processing graph for long enough that you'll miss the deadline... unless we run the synth itself on multiple cores (most commonly different voices on different cores). That frees up dependencies in the graph, allows other plugins to be processed earlier, and hopefully lets the whole thing meet the deadline. There is a cost, so it only makes sense if a single plugin takes long enough to compute, but... this happens.
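The voices-on-different-cores idea might be sketched like this. Note this is illustration only: a real plugin would use persistent worker threads already running at audio priority, since spawning a `std::thread` per block allocates and is itself not realtime safe — the point here is just the voice partitioning:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Stand-in for real per-voice DSP: each voice adds a tiny amount
// into the mix buffer (value is arbitrary, just for illustration).
void renderVoice(int voice, std::vector<float>& out) {
    for (float& s : out) s += 0.01f * static_cast<float>(voice);
}

// Split the voices across two workers, then sum the partial mixes.
void renderBlock(int numVoices, std::vector<float>& mixA,
                 std::vector<float>& mixB) {
    std::thread worker([&] {                        // second core
        for (int v = numVoices / 2; v < numVoices; ++v)
            renderVoice(v, mixB);
    });
    for (int v = 0; v < numVoices / 2; ++v)         // this core
        renderVoice(v, mixA);
    worker.join();  // both halves must finish before the deadline
    for (std::size_t i = 0; i < mixA.size(); ++i)
        mixA[i] += mixB[i];
}
```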

So how do you make sure your audio threads are at the same priority? You put them at the system-designated audio priority and hope that the DAW (and all other plugins) are doing the same thing, like they should.
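On POSIX systems, putting the current thread at a realtime priority might look roughly like this. This is a sketch only: the right policy and priority value are platform- and host-specific, the call can fail without the needed privileges, and on macOS/Windows you would use the OS audio workgroup / MMCSS ("Pro Audio") facilities instead:

```cpp
#include <pthread.h>
#include <sched.h>

// Attempt to move the calling thread to SCHED_FIFO at the given
// priority. Returns false if the system refuses (e.g. insufficient
// privileges), in which case the thread keeps its old priority.
bool setRealtimePriority(int prio) {
    sched_param sp{};
    sp.sched_priority = prio;
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) == 0;
}
```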
