Most efficient (no. of clock cycles) way to implement a DAW timeline with MIDI and all?

DSP, Plugin and Host development discussion.

Post

I've been wondering about this. I searched for source code for the host sequencer part of a DAW, and while I could find implementations that work, I was puzzled: some of them seem quite slow (even though they're in C), while papers I've read discuss e.g. how to read samples from disk efficiently.

Now, intuitively the sequencer is quite a complex thing, since it encompasses:
  • MIDI events and processes on them
  • playing back audio samples and processes on audio samples
What confuses me, for example, is whether the data and its graphical representation should be separated. Intuitively it sounds easier to process just the data (e.g. a MIDI start time) rather than graphical concepts (some (x, y) position), but there must still be a translation between the two to display anything to the user. From there I begin to see different ways to modularize this, and I'm not at all sure which would take the least amount of instructions, because that could also depend on how the software is used.

How can one study such a thing?
Last edited by soundmodel on Wed Mar 06, 2024 10:14 am, edited 1 time in total.

Post

soundmodel wrote: Wed Mar 06, 2024 8:30 am How can one study such a thing?
Start building a thing. Run into problems. Think hard about it. Look for a framework that might fit the bill. Start over building a thing with the framework. Run into problems again. Have a better and much simpler idea on how to do this now that you've gained some experience. Start over again with your new concept of the simplest way to build a thing. Run into roadblocks again and discover the limitations of your elegant simplistic approach. Start over again, this time with a much more complex but flexible and rigorous concept, which is impossible to explain to anyone. Get annoyed with the complex structure you've built around yourself. Swing back and forth between complex and simplistic concepts for ~20 years. Then finally build something very messy out of bits and pieces from the last 20 years that at least works and somehow resembles a product.

Ideally you'd vary this approach by building something that works as early as possible, even while you're still thinking that there must be a better/easier/more flexible way to do it.

On a serious note: separating model and view is always a good idea. In the case of audio products it's a good idea to look at the audio/DSP process as a kind of view as well, which merely presents the underlying model by playing it back.

The biggest challenge is to decide with each and every feature you want to implement, what goes where, and how it all fits together in the bigger concepts. That, and naming things clearly and rigorously.

If your goal is to build something complex like a DAW with "least amount of instructions", you're entering the world of pain: the less code you're ready to write to express something complex, the more rigid, inflexible and tightly coupled it has to be.
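To make the model/view split concrete: the timeline model can be plain data with times in musical ticks and no pixels anywhere; only the view translates ticks into screen coordinates at the current zoom level. A minimal sketch (all names hypothetical, not from any particular DAW):

```cpp
#include <cstdint>
#include <vector>

// Pure data model: positions in musical ticks, no pixels anywhere.
struct NoteEvent {
    int64_t startTick;   // position on the timeline, in ticks
    int64_t lengthTicks; // duration, in ticks
    uint8_t pitch;       // MIDI note number 0..127
    uint8_t velocity;    // MIDI velocity 0..127
};

struct Clip {
    std::vector<NoteEvent> notes; // kept sorted by startTick
};

// View-side translation: only the view knows about pixels.
// pixelsPerTick is a property of the current zoom level, so
// zooming never touches the model at all.
inline double tickToX(int64_t tick, double pixelsPerTick) {
    return static_cast<double>(tick) * pixelsPerTick;
}
```

The audio engine can read the same `Clip` and never needs `tickToX`, which is the sense in which playback is "just another view" of the model.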

Post

hugoderwolf wrote: Wed Mar 06, 2024 9:23 am
soundmodel wrote: Wed Mar 06, 2024 8:30 am How can one study such a thing?
If your goal is to build something complex like a DAW with "least amount of instructions", you're entering the world of pain: the less code you're ready to write to express something complex, the more rigid, inflexible and tightly coupled it has to be.
But I think we want the audio I/O to be as fast as possible. The complication is that, because of the GUI, there is also I/O from the data to the UI. And since this is a low-latency application, I'm not sure that Java Swing-style design patterns are relevant here. In Swing and similar frameworks the rule of thumb is that the GUI itself is almost never the bottleneck.

My intuition says the same as yours: the DAW should in principle operate exactly the same even if the GUI were removed.

However, due to the I/O aspects, I started to wonder whether a tightly coupled GUI and data model is in fact what gives the most I/O performance, so that one does not have to go searching for where to route an incoming MIDI message, because everything related would live in the same component. A DAW eventually becomes such a large program that not everything can be kept in memory, nor kept close together, and that introduces having to search for components.

If I reason by analogy with operating systems, it suggests that one wants a monolithic DAW. Modularized OSes have certain benefits, but they're certainly not the most performant ones.
Last edited by soundmodel on Wed Mar 06, 2024 10:16 am, edited 1 time in total.

Post

Ok, by "least amount of instructions" you mean CPU instructions, or lines of code? That's a huge difference. Also fewer CPU instructions aren't automatically faster.

To make the audio part fast and efficient, everything regarding GUI and business logic must get out of the way (yet it still needs to live somewhere; it's all part of the intended behavior of your software). That means you need to decouple them. For that, you need good concepts and design patterns that help you do the decoupling, and these will result in more lines of code than just mingling all the intended behavior together in a spaghetti mess. You need to understand the ins and outs of each design approach to make good decisions about it. This has nothing to do with Java Swing; it's the core of what programming actually is.

My advice is to start building something rather than thinking too deeply about all of this beforehand. Build something, and you'll understand.
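One common way to do that decoupling in audio software (a sketch, not the method of any particular DAW) is a single-producer/single-consumer lock-free ring buffer: the GUI thread pushes small messages, and the audio thread pops them without ever taking a lock, so the real-time path never blocks on the UI.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal single-producer/single-consumer lock-free queue.
// One GUI thread pushes, one audio thread pops; no locks, no allocation
// after construction, so it is safe to call pop() on the audio thread.
template <typename T, std::size_t Capacity>
class SpscQueue {
public:
    bool push(const T& item) {                     // GUI thread only
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) % Capacity;
        if (next == read_.load(std::memory_order_acquire))
            return false;                          // queue full: drop or retry
        buffer_[w] = item;
        write_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop() {                       // audio thread only
        std::size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return std::nullopt;                   // queue empty
        T item = buffer_[r];
        read_.store((r + 1) % Capacity, std::memory_order_release);
        return item;
    }

private:
    std::array<T, Capacity> buffer_{};
    std::atomic<std::size_t> write_{0};
    std::atomic<std::size_t> read_{0};
};
```

This is essentially what JUCE's `AbstractFifo` provides; the point is that the two sides share only two atomic indices, nothing else.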

Post

hugoderwolf wrote: Wed Mar 06, 2024 9:39 am Nothing to do with Java Swing. It's the core of what programming actually is.

My advice is to start building something rather than thinking too deeply about all of this beforehand. Build something, and you'll understand.
But it does, because Swing and similar frameworks already do the decoupling. And the usual claim is that, for 60 FPS or whatever, the GUI code and the framework are almost never the cause of sluggishness.

However, a DAW is not your typical GUI app, and I have not found a framework that offers the kind of templates for it that Swing offers for general apps.

Or well, technically one could lift the JUCE or Mackie Tracktion source, which can be expected to be quite close to the most performant varieties.

Crucially, though, I haven't found a theoretical explanation of what the most efficient architecture must be for the best audio and MIDI I/O bandwidth and latency.
Last edited by soundmodel on Wed Mar 06, 2024 11:46 am, edited 2 times in total.

Post

This is all very blurry, and I honestly don't understand what you are trying to say or ask, and what problem you are trying to solve. Can you be more specific, please? What do you want to build? Have you tried building it in practice yet? What challenges have you encountered? What do you want to talk about? UI? Drawing (not the same as UI)? MIDI? Realtime Audio? DSP? General software architecture? Those are all very different topics, and you need to focus on one at a time in order to understand how all of it fits together in practice.

Post

hugoderwolf wrote: Wed Mar 06, 2024 11:17 am This is all very blurry, and I honestly don't understand what you are trying to say or ask, and what problem you are trying to solve. Can you be more specific, please? What do you want to build? Have you tried building it in practice yet? What challenges have you encountered? What do you want to talk about? UI? Drawing (not the same as UI)? MIDI? Realtime Audio? DSP? General software architecture? Those are all very different topics, and you need to focus on one at a time in order to understand how all of it fits together in practice.
No, but this has been explored for decades already, so some people already know. No need to reinvent the wheel here; I just cannot find existing source code that would make it simple to study. If only REAPER were open source.

But here's, for example, a thesis about DAWs for Android, which is an okay benchmark, since the MVP model can be essentially the same. The author notices:
Another problem is heavy UI fragments. The EffectFragment requires a lot of UI elements: global views, switches, sliders and dynamic text fields. Navigating to this fragment could be a bit slow on old devices. This problem could be resolved by rebuilding the EffectFragment in a more modular way. For example using the RecyclerView class could be the solution to this problem.
https://www.theseus.fi/bitstream/handle ... sequence=2

So yes: how should the UI and the processing code be connected so that lag is minimized?

It sounds like one wants to keep most things easily searchable, yet keeping everything in memory is too heavy. And when we do search, we don't necessarily want to pull in a huge class if we only use one member of it.

Maybe the answer is to give every component (audio, GUI, input, ...) its own thread? In Reason, the multithreaded setting makes a world of difference.
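For the simplest kind of GUI-to-audio connection, the threads don't even need a queue: a shared atomic value lets the audio thread read the latest GUI state wait-free. A sketch under the assumption that a single `float` parameter is being shared (real DAWs combine several such mechanisms):

```cpp
#include <atomic>

// The GUI thread writes the fader position; the audio thread reads it
// once per block. std::atomic<float> is lock-free on mainstream
// platforms, so the audio thread never blocks on the GUI.
struct SharedGain {
    std::atomic<float> gain{1.0f};

    // Called from the GUI thread whenever the user moves the fader.
    void setFromGui(float g) { gain.store(g, std::memory_order_relaxed); }

    // Called from the audio thread at the top of each processing block.
    float readInAudioThread() const {
        return gain.load(std::memory_order_relaxed);
    }
};
```

Relaxed ordering is enough here because each read is an independent snapshot; if the parameter must stay consistent with other shared data, a message queue or acquire/release ordering is needed instead.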

Post

soundmodel wrote: Wed Mar 06, 2024 8:30 am I searched for source code for the host sequencer part of a DAW, and while I could find implementations that work, I was puzzled: some of them seem quite slow (even though they're in C), while papers I've read discuss e.g. how to read samples from disk efficiently.
1. These are two different topics (sequencing and sample streaming). They are not related, yet you keep treating them as if they were. Stop doing that.

2. How fast/efficient does a sequencer need to be? Fast enough to work in real time on a 10-year-old computer is good enough, right? So some dude wrote something that worked and put it online. I bet that dude was still in the phase where he wanted code that he could understand himself (even after six months, which is a true accomplishment) and that worked.
Next comes the optimisation phase. There's no use optimising something that doesn't even work. And then, from trying to be very efficient, come other problems. The headaches never stop...

3. The answer to life, the universe and everything: it depends.
Writing code means making compromises all the time. Often there are three metrics you want to score high on, and you cannot have all three high all of the time; one has to be sacrificed. That's analogous to RAS: Reliability, Availability, and Serviceability.

Decisions, decisions, decisions...

If you wanna go down that rabbit hole, then go dig that hole for yourself. It's no use exploring holes which others have dug.
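On the "fast enough" sequencer from point 2: the core can be surprisingly small. If events are kept sorted by time, finding the ones that fall inside one audio block is a binary search, which is cheap even on a 10-year-old machine. A sketch with hypothetical names, not lifted from any real host:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct MidiEvent {
    int64_t sampleTime; // absolute timeline position, in samples
    uint8_t status, data1, data2;
};

// Return the events whose time falls in [blockStart, blockStart + blockSize).
// 'events' must be sorted by sampleTime; lookup is O(log n + k), where k is
// the number of events returned.
std::vector<MidiEvent> eventsForBlock(const std::vector<MidiEvent>& events,
                                      int64_t blockStart, int64_t blockSize) {
    auto cmp = [](const MidiEvent& e, int64_t t) { return e.sampleTime < t; };
    auto first = std::lower_bound(events.begin(), events.end(), blockStart, cmp);
    auto last  = std::lower_bound(first, events.end(),
                                  blockStart + blockSize, cmp);
    return {first, last};
}
```

A real host would also convert each event's time to an offset within the block and avoid allocating on the audio thread, but the lookup itself is this simple.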

Post

Maybe one could discover the optimal division by mining DAW usage patterns?

Post

There is not one way of doing something. Every pattern and design decision comes with consequences, and developers decide how they express the behavior they want, and live with and suffer from the consequences. In engineering, you have what some call the "law of conservation of problems". The sum of all problems remains the same. With every design decision, you just turn one problem into another one.

There is no point in trying to understand such code discussions on some specific project without having actual experience with that specific codebase. Hell knows what it is that they call a "fragment" over there.

Build something and you will understand. Focus on something specific!
