DAW construction kit

DSP, Plugin and Host development discussion.

Post

antic604 wrote: Fri May 07, 2021 7:17 amDAWs are very demanding realtime applications.
No, they aren't. I have a grunty PC so I can render 3d animations and After Effects compositions. Compared to those pursuits, audio is a doddle.
Regarding S1, make sure to read up on the relationship between audio buffer, dropout protection and realtime monitoring.
I'll have a look in the manual, but I've been doing this for 40 years; I think I understand how it works by now. The reality is that I can't run Studio One with anything like the buffer I used to use for Orion. To make our songs work on my Surface Pro, I've had to strip out effects and replace a lot of synths, and even then I think we'll probably end up going back to my big laptop, just so we can get the sound we want. Or maybe we'll just go back to using Orion live; I don't know at the moment, but the last couple of weeks have been a big eye-opener/disappointment for me in regards to the limitations of S1. In an ideal world I'd be able to use the 32-bit version of Orion, but we've gone beyond that now, so it is what it is.
NOVAkILL : Asus RoG Flow Z13, Core i9, 16GB RAM, Win11 | EVO 16 | Studio One | bx_oberhausen, GR-8, JP6K, Union, Hexeract, Olga, TRK-01, SEM, BA-1, Thorn, Prestige, Spire, Legend-HZ, ANA-2, VG Iron 2 | Uno Pro, Rocket.

Post

BONES wrote: Fri May 07, 2021 8:37 am
antic604 wrote: Fri May 07, 2021 7:17 amDAWs are very demanding realtime applications.
No, they aren't. I have a grunty PC so I can render 3d animations and After Effects compositions. Compared to those pursuits, audio is a doddle.
Did you just compare a realtime process, which has to compute the state of dozens of tracks with dozens of plugins on them 48,000 times a second without interruption, to a process that sometimes takes half a day to render a few seconds of footage?!

It's like saying Ethiopian marathon runners should dominate the 100m sprint, because a sprint is just 1/420 of a marathon...

:dog: :lol:
Music tech enthusiast
DAW, VST & hardware hoarder
My "music": https://soundcloud.com/antic604

Post

mystran wrote: Fri May 07, 2021 1:52 am
That said.. I'm not sure how I feel about the whole "construction kit" idea, because I feel like much of the complexity in a DAW might come from getting all the pieces to interoperate nicely.
JACK can be considered a barebones DAW, with the clients being the components. Emulating the idea with JACK might show how well the pieces can interoperate.

Post

It's what we buy our computers for and I max out the performance of my laptop on graphics work way more often than I do with music. Usually when I render a song, it renders at 2.5 - 5 times realtime speed, so when it's just doing realtime, it's barely ticking over. OTOH, when I am caching a preview in After Effects, it usually runs at around one-fifth of realtime so every time you want to see what you're working on, you are using the full power of the CPU and GPU.

When I'm doing 3D, it's usually with Eevee, the realtime renderer in Blender. It says "realtime" but it really takes a second or two per frame, which means I can render 30 seconds of close-to-photorealistic HD in a couple of hours.

Post

BONES wrote: Fri May 07, 2021 8:37 am
antic604 wrote: Fri May 07, 2021 7:17 amDAWs are very demanding realtime applications.
No, they aren't.
:lol:

Sorry, BONES, but... maybe do at least some of your homework sometimes...

Or why would you think you need a special low-latency driver for audio, or that Linux needs a special low-latency kernel for audio? Or why would you think you get crackles in the audio when one of your cores reaches its processing limit?
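The arithmetic behind those crackles is simple enough to sketch. The numbers below are illustrative, not taken from anyone's setup: at a typical sample rate and buffer size, the audio callback has only a few milliseconds to finish, and missing that deadline even once produces an audible click.

```python
# Rough arithmetic behind the realtime audio deadline (illustrative numbers).
sample_rate = 48_000   # samples per second
buffer_size = 256      # samples delivered per audio callback

# Time budget per callback: miss it once and the interface underruns (click/pop).
deadline_ms = buffer_size / sample_rate * 1000
print(f"{deadline_ms:.2f} ms per buffer")          # prints "5.33 ms per buffer"

# How often that deadline recurs, every second, for the whole session.
callbacks_per_second = sample_rate / buffer_size
print(f"{callbacks_per_second:.1f} callbacks/s")   # prints "187.5 callbacks/s"
```

An offline render has no such deadline: a slow frame just makes the export take longer, which is why average throughput says nothing about realtime reliability.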

And why do I have to point out these completely obvious things?
Last edited by chk071 on Fri May 07, 2021 1:00 pm, edited 1 time in total.

Post

BONES wrote: Fri May 07, 2021 12:29 pm...Usually when I render a song, it renders at 2.5 - 5 times realtime speed, so when it's just doing realtime, it's barely ticking over...
BONES wrote: Fri May 07, 2021 3:15 am...I am amazed at how difficult it is to get Studio One running reliably (no clicks or pops) on hardware that Orion handles without any problems.
Now you're confusing raw power with realtime processing :shrug:

That's why it's "barely ticking", but at the same time it's "difficult to get it running without clicks or pops". When you're rendering your song offline, it's possible that in one second it'll calculate 10s of audio and in the next second it will only do 0.5s. It will average out to 5x realtime speed at the end, but apparently your PC isn't optimised for realtime processing.

But I'd still read up on buffer size, dropout protection and realtime monitoring, despite your 40 years of experience. This might come as a surprise, but I'm learning something new every day :)

Post

bitwise wrote: Fri May 07, 2021 8:55 am
mystran wrote: Fri May 07, 2021 1:52 am
That said.. I'm not sure how I feel about the whole "construction kit" idea, because I feel like much of the complexity in a DAW might come from getting all the pieces to interoperate nicely.
JACK can be considered a barebone daw, the clients being the components. Emulating the idea with JACK might show how well the pieces can interoperate.
JACK solves the problem of routing audio and midi, but if you're building a DAW and working in-process that part is not very complicated.

The basic idea is you start by building a DAG (ie. every DSP unit is a node and every connection is an edge), which gives you a partial order (or a lattice if you add top/bottom nodes, which can be handy for sync). To prevent cycles (or break them down with loop delays), you can do a simple DFS in reverse direction to see if you can already reach the node (ie. whether the new edge would form a cycle). From the partial order you can choose a topological order, which you can then use to compute things like latencies (eg. for PDC delays; if you allow critical edges, then break those down with a dummy node whenever you need to insert a delay on the edge).
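A minimal Python sketch of the two pieces described above, the reverse-reachability cycle check on edge insertion and a topological ordering of the DAG. All names are illustrative, not from any real DAW codebase; the graph is just a dict from node to its set of downstream nodes.

```python
# Hypothetical sketch: graph maps node -> set of downstream nodes.

def would_form_cycle(graph, src, dst):
    """Adding edge src -> dst creates a cycle iff src is already
    reachable from dst (iterative DFS instead of recursion)."""
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

def topological_order(graph):
    """Kahn's algorithm: repeatedly emit nodes whose inputs are all done.
    One valid topological order chosen from the partial order."""
    indegree = {n: 0 for n in graph}
    for outs in graph.values():
        for m in outs:
            indegree[m] = indegree.get(m, 0) + 1
    ready = [n for n, d in indegree.items() if d == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for m in graph.get(n, ()):
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order
```

Walking the resulting order front to back, each node's output latency is the maximum over its inputs plus its own, which is where the PDC delays mentioned above come from.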

You can also use your chosen topological order for processing, but for multi-threading you only really need to respect the partial order and the easiest way to do this is to use a threadpool, then keep a counter per node of how many incoming edges of a node have not been processed for the current block yet and then release the node into the pool when that number hits 0. When a node finishes processing, it just decrements the counters of all downstream nodes and releases them to the pool as necessary.

There's a bunch of performance tuning that can be done (eg. priority heuristics, pipelining, skipping the pool for nodes you can just as well process directly), but even the "naive" approach works surprisingly well in practice and will happily process whatever arbitrary DAG you throw at it.

Post

mystran wrote: Fri May 07, 2021 3:31 pm
bitwise wrote: Fri May 07, 2021 8:55 am
mystran wrote: Fri May 07, 2021 1:52 am
That said.. I'm not sure how I feel about the whole "construction kit" idea, because I feel like much of the complexity in a DAW might come from getting all the pieces to interoperate nicely.
JACK can be considered a barebone daw, the clients being the components. Emulating the idea with JACK might show how well the pieces can interoperate.
JACK solves the problem of routing audio and midi, but if you're building a DAW and working in-process that part is not very complicated.

The basic idea is you start by building a DAG (ie. every DSP unit is a node and every connection is an edge), which gives you a partial order (or a lattice if you add top/bottom nodes, which can be handy for sync). To prevent cycles (or break them down with loop delays), you can do a simple DFS in reverse direction to see if you can already reach the node (ie. whether the new edge would form a cycle). From the partial order you can choose a topological order, which you can then use to compute things like latencies (eg. for PDC delays; if you allow critical edges, then break those down with a dummy node whenever you need to insert a delay on the edge).

You can also use your chosen topological order for processing, but for multi-threading you only really need to respect the partial order and the easiest way to do this is to use a threadpool, then keep a counter per node of how many incoming edges of a node have not been processed for the current block yet and then release the node into the pool when that number hits 0. When a node finishes processing, it just decrements the counters of all downstream nodes and releases them to the pool as necessary.

There's a bunch of performance tuning that can be done (eg. priority heuristics, pipelining, skipping the pool for nodes you can just as well process directly), but even the "naive" approach works surprisingly well in practice and will happily process whatever arbitrary DAG you throw at it.
All the processing should take place inside the mixer. Consider the sequencer as an event source: it fills the mixer's audio buffers and the plugins' input buffers for audio clips, and for MIDI clips it drives the plugins with MIDI data. Plugin format handlers are just factories; once a plugin is created, it's plugged into the mixer DAG. So only the mixer component's developers take care of the DAG. A separation of concerns.

Post

chk071 wrote: Fri May 07, 2021 12:54 pm Linux needs a special low latency kernel for audio?
This is not true. Very low latency settings are often better supported by using the RT kernel, but Linux does not need a special kernel for audio.

Post

bitwise wrote: Fri May 07, 2021 6:43 pm All the processing should take place inside the mixer. Consider the sequencer as an event source: it fills the mixer's audio buffers and the plugins' input buffers for audio clips, and for MIDI clips it drives the plugins with MIDI data. Plugin format handlers are just factories; once a plugin is created, it's plugged into the mixer DAG. So only the mixer component's developers take care of the DAG. A separation of concerns.
Somewhere else you said you'd written a DAW. Nothing you've written in either of the two threads you've created on this topic really supports the idea that you ever finished a serious DAW. The actual complexities of doing this sort of thing "right" go so far beyond anything you've even hinted at here that it's almost a joke.

Rack and others present REALLY amazing modular environments. If you want to play with a "DAW blocks" world, I would recommend using such a system. If you actually want to write a DAW, that approach isn't going to get you there. A DAW is not a bunch of glued/connected components.

Post

bitwise wrote: Fri May 07, 2021 6:43 pm All the processing should take place inside the mixer. [...] A separation of concerns.
Isn't this the exact opposite of "separation of concerns"?

Post

mr.ardour wrote: Fri May 07, 2021 7:48 pm Somewhere else you said you'd written a DAW. Nothing you've written in either of the two threads you've created on this topic really supports the idea that you ever finished a serious DAW. The actual complexities of doing this sort of thing "right" go so far beyond anything you've even hinted at here that it's almost a joke.
I consider it a prototype. Where did I say it was full-fledged? I said that I saw the time passing unmercifully and that was not what I had in mind. I didn't want to spend all my time fine-tuning and polishing. I just wanted to implement my way of organizing, storing and easily recreating a song, without losing new ideas that might pop up.

Yet it supported audio clips and VST 2.4 plugins. It had a MIDI editor, could visually lay out the song structure, and could mix the tracks while applying effects. But I wanted to implement my way of organizing, storing and easily recreating a song, without losing new ideas that might pop up unexpectedly. Because of this I lost interest in DAWs.

Who knows? Perhaps it's because I haven't touched a DAW for years that now, as you say, I don't know what I'm saying; I might have forgotten what it is all about.


mr.ardour wrote: Fri May 07, 2021 7:48 pm Rack and others present REALLY amazing modular environments. If you want to play with a "DAW blocks" world, I would recommend using such a system. If you actually want to write a DAW, that approach isn't going to get you there. A DAW is not a bunch of glued/connected components.

Excusatio non petita... (an unsolicited excuse is a self-accusation)
Who said they are not amazing? Good luck to you all.

Yet I don't want to play with DAW blocks; I want to implement my way of organizing, storing and easily recreating a song, without losing new ideas that might pop up unexpectedly.

Post

mystran wrote: Fri May 07, 2021 9:04 pm
bitwise wrote: Fri May 07, 2021 6:43 pm All the processing should take place inside the mixer. [...] A separation of concerns.
Isn't this the exact opposite of "separation of concerns"?
It seemed to me that your concern was that scattering the DAG nodes over multiple components would make it difficult to handle the DAG correctly, and I say that the DAG concern belongs only to the mixer. The other components have other tasks.

Post

bitwise wrote: Fri May 07, 2021 10:10 pm It seemed to me that your concern was that scattering the DAG nodes over multiple components would make it difficult to handle the DAG correctly, and I say that the DAG concern belongs only to the mixer. The other components have other tasks.
You must have completely misunderstood what I said then.

My entire point was that processing an arbitrary audio graph is a relatively simple thing to do if you're working inside a single process. Since I figured this might not be totally obvious to all potential readers, and I was kinda bored, I figured I might just as well provide a basic blueprint.

However, I'm going to insist that routing signals (assuming that's what you want your mixer to do) and scheduling processing (which is what I provided a blueprint for) are fundamentally two different tasks, and if you put them into the same module, that's almost a textbook example of violating separation of concerns.

Post

FYI, it's a design principle:
https://en.m.wikipedia.org/wiki/Separation_of_concerns
Without it you get the structure of a bowl of mud, which is no good for anything supposedly "modular" and pluggable.
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is served over https!!

