Audio Programming Environment 0.5.0: C++ DSP directly in your DAW!

DSP, Plugin and Host development discussion.

Post

mystran wrote: Sat Feb 13, 2021 2:45 pm
camsr wrote: Sat Feb 13, 2021 9:11 am After experiencing problems using -O3 I decided to get into the fine details of compiler optimization settings. Ever since, I have used -O2 as a basis for including more optimizations.
Both -O2 and -O3 only do "safe" optimizations (and really, they are typically about 99% the same), so if you experience problems with one and not the other, then it's probably a problem with your code and I'd suggest getting to the bottom of it, because the numbered levels are moving targets, so what -O3 does today, -O2 might do tomorrow.
You are probably right; some of the code I was testing would be considered unsafe by the standard.

Post

camsr wrote: Sun Feb 14, 2021 12:27 am
mystran wrote: Sat Feb 13, 2021 2:45 pm
camsr wrote: Sat Feb 13, 2021 9:11 am After experiencing problems using -O3 I decided to get into the fine details of compiler optimization settings. Ever since, I have used -O2 as a basis for including more optimizations.
Both -O2 and -O3 only do "safe" optimizations (and really, they are typically about 99% the same), so if you experience problems with one and not the other, then it's probably a problem with your code and I'd suggest getting to the bottom of it, because the numbered levels are moving targets, so what -O3 does today, -O2 might do tomorrow.
You are probably right; some of the code I was testing would be considered unsafe by the standard.
Looking at this a bit more, it seems that in GCC, -O3 mostly enables more loop reorganization, while Clang does roughly the same passes either way (maybe it changes heuristics, no idea). So most likely this is something like invalid type punning (ie. an aliasing violation) or maybe a race condition (eg. missing fences in multi-threaded code; ICC is actually really good at breaking code that gets this wrong), where more aggressive optimizations are more likely to trigger issues, but the problems aren't strictly caused by any particular optimization as such and the code really could fail at any time.

This is basically the flip-side of compilers getting better: if you rely on any undefined behaviour (intentionally or accidentally) there's a good chance that eventually it'll trip the optimizer to do something that breaks the intended behaviour.
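(For readers unfamiliar with the aliasing violations mentioned above, here is a minimal illustrative sketch, not taken from the thread: reading a float's bits through an integer pointer is undefined behaviour and is exactly the kind of code that works at one optimization level and breaks at another, while the memcpy version is the portable equivalent that optimizers must respect.)

Code:

#include <cstdint>
#include <cstring>

// Undefined behaviour: reading a float through a uint32_t pointer violates
// strict aliasing, so an aggressive optimizer may reorder or drop the access.
inline std::uint32_t bits_ub(float x)
{
    return *reinterpret_cast<std::uint32_t*>(&x);
}

// Well-defined: memcpy between objects of the same size is the portable way
// to reinterpret the bytes; compilers reduce this to a single register move.
inline std::uint32_t bits_ok(float x)
{
    std::uint32_t u;
    std::memcpy(&u, &x, sizeof u);
    return u;
}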

Post

On the -Ofast debate, I would want it to be at least a separate toggle (could just make another dropdown to recompile as debug/release/-Ofast). -Ofast by default does not sound optimal for research, IMO.
Jeff McClintock wrote: Fri Feb 12, 2021 4:27 am congrats! super cool idea.
Cheers!
mystran wrote: Fri Feb 12, 2021 1:14 pm The big elephant in the room though is stability: ... I wonder whether a bridging approach (ie. run the editor and all in the DAW process, but offload the actual plugin into a sandbox) might ultimately be a better choice.
You are right: you can never make C++ safe, and taking down a DAW really shouldn't be acceptable. Sandboxing is the better design, perhaps at least in a debug mode. It's certainly easier to coerce LLVM into emitting an assembly instead, which comes with a set of interesting features as well (debugging symbols come to mind).

It is something I wanted to investigate, but perhaps I never felt it necessary as I never encountered stability problems after adopting C++. But that's surely because I invented the thing and the API, so I subconsciously know what to avoid.
DJ Warmonger wrote: Fri Feb 12, 2021 1:17 pm Great, maybe finally I'll code my own plugin :)
Careful, you might end up being sucked into a whole new world :)
Music Engineer wrote: Sat Feb 13, 2021 5:21 pm thank you so much for this great tool ... it may become a cornerstone in my workflow for experimenting with ideas for realtime dsp algorithms ... i can have access to my whole dsp library.
Yes! Very cool if you can use it in your workflow, let me know how that works out.
Music Engineer wrote: Sat Feb 13, 2021 5:21 pm am i supposed to pull the values of the parameter inside the process call ... so the parameter objects actually contain arrays with per-sample values for each block?
Parameter objects store (previous, next, delta) values calculated by APE at the start of each processing block. If you evaluate them immediately, they will return "next" (ie. the most recent given value). If you evaluate them using the indexing operator they return previous + delta * n, so interpolating between the two given the sample offset n.
See
https://bitbucket.org/Mayae/ape/src/f00 ... h#lines-92
and
https://bitbucket.org/Mayae/ape/src/f00 ... #lines-284
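(For illustration, a minimal sketch of how such a parameter object could behave, with hypothetical names rather than APE's actual types; the linked headers contain the real definitions.)

Code:

// Hypothetical parameter object holding the (previous, next, delta) triple
// that the host fills in at the start of each processing block.
struct Param
{
    float previous; // value at the start of the block
    float next;     // most recently received value
    float delta;    // (next - previous) / blockSize

    // Immediate evaluation: returns the most recent value.
    operator float() const { return next; }

    // Indexed evaluation: interpolates on the fly for sample offset n,
    // so no per-sample array ever needs to be stored.
    float operator[](int n) const { return previous + delta * n; }
};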
Music Engineer wrote: Sat Feb 13, 2021 5:21 pm -are you handling denormals somewhere (supposedly with ftz/daz stuff*), so i don't have to worry about them in my scripts?
No, but I totally should! (also stop propagation of NaNs into the rest of the DAW). Do you think it makes more sense to have an API enabling fitzdazzing ( :) ) or is it something you want in the UI instead?
Music Engineer wrote: Sat Feb 13, 2021 5:21 pm -are you planning to add support for handling midi events?
Yes. I figure this prototyping pattern would extend really well to synths, if I provide a couple of scaffolding classes for voice instantiation / stealing etc. Before spending the energy I wanted to gauge the interest though.
Music Engineer wrote: Sat Feb 13, 2021 5:28 pm i wonder a bit, what the

Code:

clear(outputs, shared);
call does. is this for clearing any channels that are not used, i.e. not already filled by the loop? if so, in what circumstance is that supposed to happen? and why are channels "shared"? with whom?
This is actually a leftover. An Effect is guaranteed to have the same input and output count (= the shared count): http://jthorborg.com/content/ape/doc_05 ... ffect.html
but I only changed that recently, so some of the samples are still written to clear extraneous output channels (I've made a note to clean them up).

If need be, I'll add formalized scaffolding for "down"/"upmixing" in the future.

Post

mystran wrote: Fri Feb 12, 2021 1:14 pm My current framework can already do most of the things (except hot-reload, but I've been thinking about that) that APE does with roughly a similar amount of effort. Plotting signals is a bit more work, but we're talking about 2-3 lines per signal (+5 to add the scope to the editor). Text-editor I'd rather keep outside the DAW and I don't mind having to press ctrl-B to build. My collection of DSP utilities is pretty decent, it's all just an #include away and most importantly I already know how to use them (no manuals needed; this is really the big reason I'm wondering whether I should bother trying to use APE for simple prototyping and/or quick utilities). Not all of it might be as robust (it's really just what I need), but still.
I think we've all invested in a similarly flexible setup, and regardless of this project, that still has its place. I was hoping, though, that I managed to present the API idiomatically, so that it would make sense to someone seasoned and no manual or learning curve would really be needed (I'm obviously too biased to judge that at this point, though).

Anyway, where I could never quite get the workflow right was auditory A/B testing. If there is even a buffer's worth of discontinuity, you cannot really hear the difference well, since you involuntarily fixate on the gap and the glitches. If you need to respin a build, restart the debugging process and, perhaps more importantly, restart the time progress of whatever you're analysing, I feel too much context is lost for critical listening.

That's one of the reasons I spent a good amount of time on hot reloading, saving parameters and blending as you swap compiled instances. I was almost at the point of serializing the stateful data, so that two plugin versions would simulate in a better correlated fashion as you blend them. But for now, this seemed to work well enough.

Post

Mayae wrote: Sun Feb 14, 2021 4:50 pm
Music Engineer wrote: Sat Feb 13, 2021 5:21 pm -are you handling denormals somewhere (supposedly with ftz/daz stuff*), so i don't have to worry about them in my scripts?
No, but I totally should! (also stop propagation of NaNs into the rest of the DAW). Do you think it makes more sense to have an API enabling fitzdazzing ( :) ) or is it something you want in the UI instead?
I think it's important to have ftz/daz enabled in the plugin code itself. Maybe make a small template source and write a simple function to enable and disable it at the start and end of the processing.

Post

Mayae wrote: Sun Feb 14, 2021 5:13 pm Anyway, where I could never quite get the workflow right was auditory A/B testing. If there is even a buffer's worth of discontinuity, you cannot really hear the difference well, since you involuntarily fixate on the gap and the glitches. If you need to respin a build, restart the debugging process and, perhaps more importantly, restart the time progress of whatever you're analysing, I feel too much context is lost for critical listening.
Maybe the easier solution is to fade out the remaining buffer of the previous state, and fade in the new state. At least it will avoid a hard discontinuity.
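(A rough sketch of such a crossfade, assuming the old and the new instance both rendered the same block; the linear ramp here is only one possible shape.)

Code:

// Linear crossfade from the old instance's output to the new instance's
// output over a single buffer, avoiding a hard discontinuity at the swap.
void crossfade(const float* oldOut, const float* newOut, float* mixed, int numSamples)
{
    for (int n = 0; n < numSamples; ++n)
    {
        const float t = (n + 1) / float(numSamples); // ramps 0 -> 1 across the buffer
        mixed[n] = oldOut[n] * (1.0f - t) + newOut[n] * t;
    }
}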

Post

Mayae wrote: Sun Feb 14, 2021 4:50 pm Parameter objects store (previous, next, delta) values calculated by APE at the start of each processing block. If you evaluate them immediately, they will return "next" (ie. the most recent given value). If you evaluate them using the indexing operator they return previous + delta * n, so interpolating between the two given the sample offset n.
aha - i see. from the client code side, it looks like accessing array elements but under the hood, it computes the linear interpolation on the fly to avoid wasting memory. good design choice!
Mayae wrote: Sun Feb 14, 2021 4:50 pm No, but I totally should! (also stop propagation of NaNs into the rest of the DAW). Do you think it makes more sense to have an API enabling fitzdazzing ( :) ) or is it something you want in the UI instead?
as far as i'm concerned, i wouldn't even need a way to turn it off but that's perhaps just me. if fitzdazzing denormals is optional, i think, i'd agree with camsr, that this is something that's more appropriate to decide in the code of the script because whether you want it or not will depend on what code you are running. i'd personally like to have it active by default to avoid the boilerplate of having to manually switch it on in every script but i have no idea, if wanting it active is the universally preferred choice of other users....but i find it hard to imagine situations where i or anyone else would not want to fitzdazz those pesky denormals - i mean, i could perhaps make some up, but then they would appear contrived...to me, at least - but maybe that's just my lack of imagination. dunno.

if you catch NaNs (and maybe infinities, too?), it would be nice to alert the user about that - maybe by some sort of flashing red light or a warning message on the console or something (although, i'd prefer the former because the console is not typically open in the situations when they may occur - edit: oh - i just realized that i can keep the console open in a separate window! nice! ..in this case, what i just said might not apply)
Mayae wrote: Sun Feb 14, 2021 4:50 pm I figure this prototyping pattern would extend really well to synths, if I provide a couple of scaffolding classes for voice instantiation / stealing etc. Before spending the energy I wanted to gauge the interest though.
oh..well...voice management is really not what i was thinking of when asking about midi. this might be a second step on top of it but here i was really just thinking about a way to receive the events in the script. when playing around with ideas for dsp algorithms, it's usually totally fine when the algo is monophonic. i usually just want to set up some frequencies when receiving note-ons and maybe reset some osc-phases or zero some filter-buffers or trigger envelopes (or all of these). i think, it would be most convenient to have an additional callback that can be optionally overridden like

void handleMidiEvent(juce::MidiMessage msg) { /* blablabla */ }

or something. and when the app receives an event in a processing buffer, it would split it into a pre-event sub-buffer, call the script's process function for that, then call the handleEvent callback, and then call the process callback again with the rest of the buffer. ...and of course, if there are more events in one buffer at various instants, it would split the buffer into an appropriate number of sub-buffers (no copying is involved, of course - just adjusting some pointers). that's (more or less) how i do it in my framework for writing plugins and i think, that's much more convenient from the script writer's perspective than just passing a buffer of events along with the audio buffers to the process callback (like juce::AudioProcessor::processBlock does*), since then the script itself would have to do the splitting, which would just mean a lot of boilerplate would have to be written (or rather copy-and-pasted - which is a sin anyway).

(*) that's no criticism of juce's api decision btw. - i think, on that level, it's perfectly fine to do it like that. but on a higher level that is more convenience-oriented, having a separate midi callback is nicer imho
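(A sketch of the framework-side splitting described above, using a hypothetical sample-stamped event type and a script with process()/handleMidiEvent() callbacks; events are assumed to be sorted by sample offset.)

Code:

#include <vector>

struct MidiEvent { int sampleOffset; unsigned char status, data1, data2; };

// Splits the block at each event so that handleMidiEvent() is always called
// exactly at the event's position; only pointers are adjusted, nothing is copied.
template <typename Script>
void processWithEvents(Script& script, float* const* io, int numChannels,
                       int numSamples, const std::vector<MidiEvent>& events)
{
    std::vector<float*> sub(numChannels);
    int start = 0;

    auto run = [&](int from, int count)
    {
        if (count <= 0)
            return;
        for (int c = 0; c < numChannels; ++c)
            sub[c] = io[c] + from;
        script.process(sub.data(), numChannels, count);
    };

    for (const auto& e : events)
    {
        run(start, e.sampleOffset - start); // audio up to the event
        script.handleMidiEvent(e);          // then the event itself
        start = e.sampleOffset;
    }
    run(start, numSamples - start);         // remainder of the buffer
}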
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

Mayae wrote: Sun Feb 14, 2021 4:50 pm Parameter objects store (previous, next, delta) values calculated by APE at the start of each processing block. If you evaluate them immediately, they will return "next" (ie. the most recent given value). If you evaluate them using the indexing operator they return previous + delta * n, so interpolating between the two given the sample offset n.
I like to do this the other way: I store the current (target) value, a backwards delta and the time left on the interpolation. Then value[n] is current+delta*max(0,timeLeft-n), which means you can clip the interpolation branchlessly in the middle of a block even if the interpolation time is varied adaptively. Either way, the basic principle is great. :)
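(A sketch of that formulation with made-up names; the arithmetic follows the quoted value[n] = current + delta * max(0, timeLeft - n).)

Code:

#include <algorithm>

// (current, backwards delta, time left) parameter smoother: once n reaches
// timeLeft, max() clamps to zero and value[n] sits exactly on the target,
// so the end of the ramp is handled without a branch in the inner loop.
struct SmoothedParam
{
    float current  = 0.0f; // target value we are heading towards
    float delta    = 0.0f; // backwards step: (oldValue - current) / rampLength
    int   timeLeft = 0;    // interpolation samples remaining at block start

    float operator[](int n) const
    {
        return current + delta * float(std::max(0, timeLeft - n));
    }

    void setTarget(float target, int rampLength)
    {
        const float old = (*this)[0];
        current  = target;
        timeLeft = rampLength;
        delta    = rampLength > 0 ? (old - target) / float(rampLength) : 0.0f;
    }

    void advanceBlock(int numSamples) { timeLeft = std::max(0, timeLeft - numSamples); }
};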
Music Engineer wrote: Sat Feb 13, 2021 5:21 pm -are you handling denormals somewhere (supposedly with ftz/daz stuff*), so i don't have to worry about them in my scripts?
No, but I totally should! (also stop propagation of NaNs into the rest of the DAW). Do you think it makes more sense to have an API enabling fitzdazzing ( :) ) or is it something you want in the UI instead?
IMO checking/eliminating NaNs and infinities is best done by a wrapper (eg. clip to some large finite values and set NaNs to zero just before returning to DAW; optionally show a message in the GUI if anything "ugly" was detected). As for setting FTZ/DAZ the way I usually prefer is to have a little RAII object take care of that (ie. construct at the top of process and let it restore state when it goes out of scope). You could just provide such an object (for extra credit, make it optionally adjust rounding mode too) and let the user decide.
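(A minimal sketch of such a RAII guard for x86/SSE, using the standard intrinsics; other architectures would need their own equivalent, and the optional rounding-mode handling is left out.)

Code:

#include <xmmintrin.h> // _mm_getcsr, _mm_setcsr, _MM_SET_FLUSH_ZERO_MODE
#include <pmmintrin.h> // _MM_SET_DENORMALS_ZERO_MODE

// Construct at the top of process(): denormals are flushed to zero for the
// rest of the scope, and the previous MXCSR state is restored on destruction.
struct ScopedNoDenormals
{
    ScopedNoDenormals() : saved(_mm_getcsr())
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    }

    ~ScopedNoDenormals() { _mm_setcsr(saved); }

    unsigned int saved;
};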

Post

Okay, so I'll probably add an always-on option in the config to flush denormals to zero, if it seems feasible, and an RAII object in scripts for when it's important.

Pruning NaNs / infinities will happen on outputs automatically, with a red flashing light.
camsr wrote: Sun Feb 14, 2021 9:50 pm Maybe the easier solution is to fade out the remaining buffer of the previous state, and fade in the new state. At least it will avoid a hard discontinuity.
Indeed, that is what APE currently does.
Music Engineer wrote: Mon Feb 15, 2021 9:11 am oh..well...voice management is really not what i was thinking of when asking about midi. this might be a second step ... when playing around ... i usually just want to set up some frequencies when receiving note-ons
So I imagine the basic system would just pass the midi events along the buffers, and you could access them directly if you want. A voice controller would probably just be separate and eat the events.
Music Engineer wrote: Mon Feb 15, 2021 9:11 am i think, it would be most convenient to have an additional callback ... and when the app receives an event in a processing buffer, it would split it into a pre-event sub-buffer, call the script's process function for that

...

(*) that's no criticism of juce's api decision btw. - i think, on that level, it's perfectly fine to do it like that. but on a higher level that is more convenience-oriented, having a separate midi callback is nicer imho
Agree completely, fortunately we can have both. A scaffold / convenience derived processor could definitely be added for that.

Post

Mayae wrote: Tue Feb 23, 2021 10:50 pm So I imagine the basic system would just pass the midi events along the buffers, and you could access them directly if you want. A voice controller would probably just be separate and eat the events.
I think a raw list of events should be available, as there are other potential ways to use MIDI (including note data) beyond standard voice allocation (which seems more like a candidate for a library utility).

What I do in my own framework these days is to have two std::vectors of midi events (one for input, one for output) as data members in my internal plugin base class. Before a process() call, the plugin API wrapper replaces the contents of the input event vector with the incoming events and clears the output event vector. After the process() call, if the output event vector is non-empty, those events are sent as output. With in-place modifications and sufficient reserve() this is totally real-time safe too. :)

That's obviously not the only possible design, but I figured I'd share 'cos I really like the way one can write a plugin that doesn't care about MIDI and just ignore that it's there, yet as soon as you want the events you just look into the buffer with zero extra boiler-plate or scaffolding.
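(One way such a base class could look; names are hypothetical, but the contract matches the description above: the wrapper fills the input vector before process() and sends whatever is left in the output vector afterwards.)

Code:

#include <vector>

struct MidiEvent { int sampleOffset; unsigned char status, data1, data2; };

// The API wrapper refills midiIn and clears midiOut before every process()
// call; with reserve() and in-place modification this stays allocation-free.
class PluginBase
{
public:
    PluginBase()
    {
        midiIn.reserve(1024);
        midiOut.reserve(1024);
    }

    virtual ~PluginBase() = default;
    virtual void process(float* const* io, int numChannels, int numSamples) = 0;

protected:
    std::vector<MidiEvent> midiIn;  // events for the current block (read them if you care)
    std::vector<MidiEvent> midiOut; // events the plugin wants sent after the block
};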

Personally I really do NOT like the idea of separation of MIDI handling from audio processing, because about 99% of the time I'm going to interleave the MIDI and audio processing anyway, so all the separate MIDI callback could ever do is store the events for the next audio processing round.

Post

mystran wrote: Wed Feb 24, 2021 1:31 am Personally I really do NOT like the idea of separation of MIDI handling from audio processing, because about 99% of the time I'm going to interleave the MIDI and audio processing anyway, so all the separate MIDI callback could ever do is store the events for the next audio processing round.
i think, you are misunderstanding my suggestion. the event handler callback would receive a single event at a time and is supposed to handle it immediately (by setting up frequencies, triggering envelopes, etc.) - not store it away for later use (like the processEvents callback we used to have in vst2). interleaving of audio- and midi-processing would be provided by the framework by means of interleaving the audio-processing and event-handling calls appropriately. the point is to get rid of the boilerplate code for the interleaving
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

Music Engineer wrote: Wed Feb 24, 2021 6:59 pm interleaving of audio- and midi-processing would be provided by the framework by means of interleaving the audio-processing and event-handling calls appropriately.
You can't really do it on a framework level (well, you can.. and "certain" hosts do this, but you'll force a lot more audio buffer splitting than necessary), because you don't know what should and should not be interleaved. Just because you want to interleave the events with some audio processing doesn't necessarily mean you want to interleave all of the audio processing with the event logic.

edit:

Another less obvious case is partial interleaving of voices: say you see a note-on, you steal a voice, so you process the first part of the buffer for the stolen voice, then look at the next event, possibly do another split for another voice, and so on. When you're done with all the events, you can then process the remaining part of the buffer for all your voices. The key thing here is to realize that if a voice wasn't modified by any events, at this point it has not processed anything, so the whole buffer will be done in bulk. In some cases you might even do such splitting for some modules only, then process the rest in bulk, but the point I'm trying to make here really is that there is more than one way to handle this, and your proposed solution (as far as I can see) mandates a one-size-fits-all approach that is often not ideal.
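(A sketch of that partial interleaving with hypothetical voice/event types: only the voice touched by an event is rendered up to the event's offset; every untouched voice renders its whole block in one go at the end.)

Code:

#include <vector>

struct Event { int sampleOffset; int note; };

struct Voice
{
    int renderedUpTo = 0; // samples of the current block already rendered

    void render(int from, int to) { /* mix [from, to) into the output */ renderedUpTo = to; }
    void noteOn(int note)         { /* retrigger this (stolen) voice */ }
};

void processBlock(std::vector<Voice>& voices, const std::vector<Event>& events, int numSamples)
{
    for (auto& v : voices)
        v.renderedUpTo = 0;

    for (const auto& e : events)
    {
        Voice& stolen = voices.front();                     // whichever voice the allocator steals
        stolen.render(stolen.renderedUpTo, e.sampleOffset); // catch up only this voice
        stolen.noteOn(e.note);
    }

    for (auto& v : voices)                                  // untouched voices render in bulk
        v.render(v.renderedUpTo, numSamples);
}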

Post

mystran wrote: Thu Feb 25, 2021 2:33 am You can't really do it on a framework level (well, you can.. and "certain" hosts do this, but you'll force a lot more audio buffer splitting than necessary)
hosts do that also? so then, the time-stamps of all incoming events in a plugin are zero? i suppose, that's why "certain" hosts (FL?) hand "weird" buffersizes to plugins?

yes, i know that this forces the splitting of the buffers on the framework side - but i was thinking, that at some point, someone somewhere must do the splitting anyway, so it might as well be done once and for all on the framework side and the potential performance difference of doing so might just be the difference between a virtual function call vs a potentially inlined function call per event - a tradeoff, i would deem acceptable for the gained convenience, especially in a prototyping environment. but i understand that there may be different preferences, so it's a good thing that we can have both.
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

Music Engineer wrote: Fri Feb 26, 2021 5:15 pm
mystran wrote: Thu Feb 25, 2021 2:33 am You can't really do it on a framework level (well, you can.. and "certain" hosts do this, but you'll force a lot more audio buffer splitting than necessary)
hosts do that also? so then, the time-stamps of all incoming events in a plugin are zero? i suppose, that's why "certain" hosts (FL?) hand "weird" buffersizes to plugins?
Well.. I don't remember if FL splits on MIDI events, but it does split on automation.
yes, i know that this forces the splitting of the buffers on the framework side - but i was thinking, that at some point, someone somewhere must do the splitting anyway, so it might as well be done once and for all on the framework side and the potential performance difference of doing so might just be the difference between a virtual function call vs a potentially inlined function call per event
Well, it's a tradeoff between simplicity and performance, but I think the key here is that just because some of your processing needs to be split doesn't mean all of it needs to be split. In a complex synth with V voices, N events and internal multithreading, it might be the difference between V+N vs. V*(N+1) jobs submitted to a threadpool and that kind of overhead is far from trivial (even at N=1).

Post

Good job! Integrating clang/LLVM is a very complex and annoying task with lots of platform-dependent issues (I made one, and just gave up after too many macOS updates and LLVM updates that kept on destroying ABIs and header paths). Happy to see someone else making this work :)
