DAW construction kit

DSP, Plugin and Host development discussion.

Post

The modular concept would be digital audio working to emulate analog audio devices. We understand that in the digital realm there are limitations on feedback processing, set by the sample rate and the available processing power. This is probably the main distinguishing factor between analog and digital with respect to recording quality. The analog realm also offers more opportunity for experimental routings, some of which may not be possible at real-time speeds when emulated digitally.

Separating elements of the audio chain into multiple modules could lead to incompatibilities. Each module would need to support a common interface that enables interaction with the others, and then something has to actually execute the code from both modules and ensure it's done compatibly. It doesn't seem like a monumental task, but it's challenging to even quantify the full scope of the issue.
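To make that concrete, here is a minimal sketch of the kind of shared contract every module and the host would have to agree on exactly, down to the ABI version. All names here are hypothetical, not any existing plugin standard:

```c
/* Hypothetical minimal module ABI. The host and every module must
 * agree on this struct, its calling conventions and its version. */
#include <stdint.h>

typedef struct {
    uint32_t    abi_version;   /* host rejects modules built for another ABI */
    const char *name;
    void      *(*create)(double sample_rate, uint32_t max_block_frames);
    void       (*destroy)(void *instance);
    /* Process num_frames samples; in/out are arrays of channel pointers. */
    void       (*process)(void *instance,
                          const float **in,  uint32_t num_in_channels,
                          float       **out, uint32_t num_out_channels,
                          uint32_t num_frames);
} module_descriptor;
```

Everything that can go incompatible lives in that one struct: channel counts, block sizes, sample format, and who calls what, when. That's the scope of the issue in miniature.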

I envision plugins working as separate boxes, the way audio hardware does. If you have worked with audio hardware, you also understand the DAW's role as the endpoint of the audio chain where recordings are captured. Once we go digital, inside a DAW, we must work through its user interface to make anything happen with the audio. A modular interface could present a massive opportunity, but if the performance falls short, it won't be the right choice for everyone.

The idea that effect plugins are loaded into a mixer track is completely antiquated, IMO. Think of them instead as separate devices in the real world, connected by wires at the speed of light. Such a system absolutely can do 1-sample-delay feedback loops, but only because we choose to design it that way. Even plugins that host other plugins implement this functionality.
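To spell out the 1-sample point: with ordinary block processing, a feedback loop between two plugins picks up at least one buffer of delay, because the second plugin can't see the first one's output until the whole block is finished. A host that wants true single-sample feedback has to schedule the loop per sample instead. A rough sketch, using hypothetical one-sample tick functions (the placeholder bodies just stand in for real DSP):

```c
/* Hypothetical per-sample tick functions for two modules in a loop;
 * trivial placeholder bodies so the sketch is self-contained. */
static float module_a_tick(float in) { return in * 0.7f; }
static float module_b_tick(float in) { return in * 0.5f; }

/* Render one block with a true 1-sample feedback path from B back to A. */
static void render_feedback_loop(float *out, int num_frames)
{
    static float fb = 0.0f;            /* B's previous output sample */
    for (int n = 0; n < num_frames; n++) {
        float a = module_a_tick(fb);   /* A sees B delayed by exactly 1 sample */
        float b = module_b_tick(a);
        fb = b;
        out[n] = b;
    }
}
```

The cost is a function call or two per sample instead of per block, which is exactly the sample-rate-and-CPU trade-off mentioned earlier in the thread.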

Post

chk071 wrote: Fri May 07, 2021 12:54 pm
Or why would you think you need a special low latency driver for audio
Do I really need to tell you this? It's fairly obvious, I'd have thought. The thing is, though, you probably don't really need as low a latency as you are running. We run a 256 sample buffer when we play live. On stage, my bandmate is playing drums on his Octapad, triggering sounds in Battery or Kontakt via a 5m MIDI cable, so latency is really important. But that doesn't mean it has to be as low as it is possible to get it, it just needs to be low enough to make everything work in time. And we can do that without putting any strain on our system, even running our most complex songs, using our most CPU hungry plugins. And that was as true in 2002 as it is today.
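To sanity-check that setup, the arithmetic is simple. Assuming a 44.1 kHz sample rate (an assumption; the actual rate isn't stated here), it works out roughly like this:

```c
#include <stdio.h>

int main(void)
{
    const double sample_rate = 44100.0;                      /* assumed rate */
    const double buffer_ms   = 256.0 / sample_rate * 1000.0; /* ~5.8 ms      */

    /* MIDI runs at 31250 baud, 10 bits on the wire per byte; a note-on
     * message is 3 bytes, so the serial link dominates -- the 5m of
     * cable itself adds effectively nothing. */
    const double midi_ms = 3.0 * 10.0 / 31250.0 * 1000.0;    /* ~0.96 ms     */

    printf("buffer: %.1f ms, MIDI note-on: %.2f ms\n", buffer_ms, midi_ms);
    return 0;
}
```

Both together land comfortably under the threshold most players notice, which is why "low enough" beats "as low as possible".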
chk071 wrote:
or Linux needs a special low latency kernel for audio? Or why would you think you get crackles in the audio when one of your cores reaches its processing limit?
I only get crackles on old, low powered machines. It never, ever happens on my very reasonably priced Dell G7 or on any previous laptop I have used as my primary workhorse. "Never", as in "never-ever".

I'm sure that if I didn't know what I was doing I could make it crackle, but I do know what I'm doing, so it's never an issue. Yes, I've been having problems trying to get things to run reliably on my Surface Pro, but that's hardly a high spec machine and it's running a CPU that's four generations out of date. If I only use synths that were around when it was new, which is kind of what I've been trying to do with it, it works perfectly well. It's a machine I would never even contemplate installing any of my motion graphics applications on because they'd be useless.
antic604 wrote: Fri May 07, 2021 12:57 pm
That's why it's "barely ticking", but at the same time it's "difficult to get it running without clicks or pops".
That's two different computers. One is a second-hand 2015 Surface Pro with a 6th Gen mobile CPU, the other is a 2019 gaming laptop with a 9th Gen desktop CPU. I fully expected to have to compromise when moving things to the Surface; I'm just surprised at how much Studio One is forcing me to compromise. For example, songs that worked perfectly well on my Acer's 7th Gen Core i5 in Orion crackle and pop like a bowl of Rice Krispies in Studio One on the Surface, with its Core i7. I'll eventually get Orion on it, just to do comparisons, but if the problem isn't with Studio One, then there has to be a problem with the Surface itself, because I expected it to eat all these songs up, yet I can't use some of my favourite synths, like ARP Odyssey or bx_oberhausen, at all. To be fair, I didn't own either when I had the Acer, but the problems I'm having are much worse than I anticipated.
antic604 wrote:
When you're rendering your song off-line it's possible that in one second it'll calculate 10s of audio and in the next second it will only do 0.5s. It will average out to 5x realtime speed at the end, but apparently your PC isn't optimised for realtime processing.
Of course it's not. It's optimised for 3D rendering performance because that's the hardest thing it has to do. I've been playing live with PCs since 2002 and I've never done the tiniest bit of optimisation because it's never been necessary. And it's still not necessary today.
antic604 wrote:
But I'd still read up about buffers, dropout protection & realtime monitoring, despite your 40 years of experience. This might come as a surprise, but I'm learning something new every day :)
I'll read up on drop-out protection because I have no idea what it is or how it works (it doesn't seem to make any difference) but I don't need realtime monitoring because I don't play anything through the computer. In fact, I route the computer through other things - both Ultranova and Analog Keys have built-in audio I/O so I don't need to carry a USB interface when I'm using either of them.
NOVAkILL : Asus RoG Flow Z13, Core i9, 16GB RAM, Win11 | EVO 16 | Studio One | bx_oberhausen, GR-8, JP6K, Union, Hexeract, Olga, TRK-01, SEM, BA-1, Thorn, Prestige, Spire, Legend-HZ, ANA-2, VG Iron 2 | Uno Pro, Rocket.

Post

Someone has probably mentioned this already, but Apple basically already created a sort of "DAW construction kit": that is what CoreAudio, CoreMIDI, AudioUnits and so on are. In a way, Logic is a skin over all the audio frameworks on macOS/iPadOS/iOS.

I've done some audio development on Apple platforms and the tools really are incredible.
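To give a flavour of how much of the plumbing those frameworks hand you, here's a minimal sketch using the classic CoreAudio/AudioToolbox C API: find the default output unit, attach a render callback, start it. The sine generator is just filler, and it assumes the unit's default deinterleaved Float32 format at 44.1 kHz:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <math.h>
#include <unistd.h>

/* Render callback: fills each output channel with a 440 Hz sine. */
static OSStatus render(void *refCon, AudioUnitRenderActionFlags *flags,
                       const AudioTimeStamp *ts, UInt32 bus,
                       UInt32 frames, AudioBufferList *io)
{
    static double phase = 0.0;
    for (UInt32 i = 0; i < frames; i++) {
        float s = (float)(0.2 * sin(phase));
        phase += 2.0 * M_PI * 440.0 / 44100.0;   /* assumed sample rate */
        for (UInt32 b = 0; b < io->mNumberBuffers; b++)
            ((float *)io->mBuffers[b].mData)[i] = s;
    }
    return noErr;
}

int main(void)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit;
    AudioComponentInstanceNew(comp, &unit);

    AURenderCallbackStruct cb = { .inputProc = render };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof cb);

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    sleep(10);                         /* let it play for ten seconds */
    AudioOutputUnitStop(unit);
    AudioComponentInstanceDispose(unit);
    return 0;
}
```

Everything a DAW does on macOS is ultimately layered over calls like these, which is why "Logic as a skin" isn't far off.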

Post

mystran wrote: Fri May 07, 2021 10:46 pm
However, I'm going to insist that routing signals (assuming that's what you want your mixer to do) and scheduling processing (which is what I provided a blueprint for) are fundamentally two different tasks, and if you put them into the same module that's almost a textbook example of a violation of separation of concerns.
But I never mentioned "routing signals" or "scheduling processing".
I'm not saying you are wrong. I'm saying: you develop the kernel of a DAW engine (calling it a mixer might be misleading) inside a shared library. What you do there, and how you do it, is up to you; you're free to modularize according to the wisest rules.
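Concretely, something like this header is what I mean by the kernel in a shared library: a deliberately small exported surface (all names hypothetical), with routing, scheduling and whatever else hidden behind it however the author likes:

```c
/* engine.h -- hypothetical public surface of the engine shared library.
 * Everything behind the opaque pointer is the author's business. */
#include <stdint.h>

typedef struct engine engine;          /* opaque; internals stay private */

engine  *engine_create(double sample_rate, uint32_t max_block_frames);
void     engine_destroy(engine *e);

/* The host pushes audio through; graph changes go in as commands. */
int      engine_process(engine *e, const float **in, float **out,
                        uint32_t num_frames);
int      engine_command(engine *e, const char *cmd); /* e.g. "connect a:out b:in" */
```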

But back to the start.

I meant: find or develop some JACK-enabled applications (if Mr. JACK doesn't mind), such as a sequencer, some synths, a mixer and so on. Run QjackCtl, connect them together, and you have assembled "a kind of" DAW. Should it work, you have a case where the parts seem to interoperate nicely. If you already think such a system is too limited, tell me so; I'd like to hear other people's experiences with it. I wouldn't mind building "a kind of" DAW this way, if it's feasible and it gives me the flexibility to customize the things I'm interested in.
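For reference, the per-application cost of joining such a setup is small. A minimal JACK client, here just a fixed gain stage, looks roughly like this; once it's running, its ports show up in QjackCtl ready to be wired to the sequencer, the synths and the mixer:

```c
#include <jack/jack.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* Called by the JACK server once per period, in its realtime thread. */
static int process(jack_nframes_t nframes, void *arg)
{
    float *in  = jack_port_get_buffer(in_port,  nframes);
    float *out = jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        out[i] = 0.5f * in[i];               /* -6 dB, as a placeholder */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("gain", JackNullOption, NULL);
    if (!client) return 1;

    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    pause();                                 /* run until killed */
    jack_client_close(client);
    return 0;
}
```

The usual pain point isn't the audio plumbing but saving and restoring the whole patch, each app's state plus all the connections, which is what session managers try to solve.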

Post

Isn't this basically what NON already did? -> https://non.tuxfamily.org/

Just run those parts that you need at any moment. Your OS is the DAW.

Post

DRMR wrote: Sat May 08, 2021 9:26 pm
Isn't this basically what NON already did? -> https://non.tuxfamily.org/

Just run those parts that you need at any moment. Your OS is the DAW.
I was almost sure something like that already existed. That's why the hint.
