Host Audio Engine Logic


Post

Hi,

I'm looking into building a VST Host application. One of the gray areas in my knowledge is what actions a Host performs (and how) to stream audio and, in particular, to sync with MIDI tracks.

Questions that come to (my) mind are:

1) How do you trigger the 'interrupt' for the start of a new cycle?
From what I hear, some use the ASIO buffer callback and others use timers. Why would you choose one over the other?

2) How do you manage plugin latency, and what if your processing time exceeds the cycle interval? Or is that not likely?

3) Do you process effect chains sequentially or is there a parallel mechanism that performs better?

4) What thread synchronization mechanisms should one use for the different interactions?
Especially the interaction to and from the Audio Engine.

5) What is the best way to manage MIDI and audio tracks with routing (side chains, sub-groups, etc.)?
How and where do you merge MIDI and audio?

6) What plugin fail-safes should one consider?
Plugins do not always behave as one might expect. What are the typical errors and how do you handle them?

7) What are the typical pitfalls for beginning host programmers?

8 ) Any tips or suggestions you might have.

To be clear: I'm not looking for detailed programming advice or religious language debates. I'm looking for high-level sequence-of-events info, down to pseudo-code-ish examples of how to perform these events. It's more about the logic than the code.

Thanx in advance.

EDIT: numbered the question for easy reference.
Grtx, Marc Jacobi.
VST.NET | MIDI.NET

Post

obiwanjacobi wrote:Hi,

I'm looking into building a VST Host application. One of the gray areas in my knowledge is what actions a Host performs (and how) to stream audio and, in particular, to sync with MIDI tracks.

Questions that come to (my) mind are:

- How do you trigger the 'interrupt' for the start of a new cycle?
From what I hear, some use the ASIO buffer callback and others use timers. Why would you choose one over the other?
The driver should "pump" the host. So, when it's ready to fill a new buffer, it should call the callback in the host, which starts the host calling all processes, in order, to read or fill that new buffer.
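
A minimal sketch of that "pump" (the names are my own illustrative ones, not any real driver API): the driver's buffer callback drives one engine cycle, and the host walks its processing graph in order to fill the block.

```cpp
#include <cstddef>
#include <vector>

// One node in the host's processing graph (a track, plugin chain, mixer bus, etc.).
struct ProcessNode {
    virtual void process(float** buffers, int numChannels, int numFrames) = 0;
    virtual ~ProcessNode() = default;
};

struct Engine {
    std::vector<ProcessNode*> orderedNodes; // topologically sorted ahead of time

    // Called by the audio driver whenever it needs the next block.
    void onDriverCallback(float** buffers, int numChannels, int numFrames) {
        for (ProcessNode* node : orderedNodes)
            node->process(buffers, numChannels, numFrames);
        // On return, 'buffers' holds the block the driver will play next.
    }
};
```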

- How do you manage plugin latency, and what if your processing time exceeds the cycle interval? Or is that not likely?
Latency and overall processing power aren't the same thing (though they are related at very low latency settings).

When a DAW can't get enough CPU power to finish all the processes, the buffer underruns and we get the familiar crackling of an overtaxed DAW. It's a "Dr. it hurts when I do this" problem. (The solution being, "Well don't do that.")

- Do you process effect chains sequentially or is there a parallel mechanism that performs better?
You have to process them sequentially if they're topologically ordered one after another. You are free to use parallel processing for parallel tracks that don't interact.
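
A rough sketch of that split, assuming independent tracks, and using std::async purely for illustration (a real host would hand these jobs to pre-spawned worker threads rather than spawning anything from the audio thread):

```cpp
#include <future>
#include <vector>

struct Track {
    // Runs this track's plugin chain in series (stubbed here as silence).
    std::vector<float> render(int numFrames) {
        return std::vector<float>(numFrames, 0.0f);
    }
};

// Render independent tracks in parallel, then sum them into one mix buffer.
std::vector<float> renderAndMix(std::vector<Track>& tracks, int numFrames) {
    std::vector<std::future<std::vector<float>>> jobs;
    for (Track& t : tracks)
        jobs.push_back(std::async(std::launch::async,
                                  [&t, numFrames] { return t.render(numFrames); }));

    std::vector<float> mix(numFrames, 0.0f);
    for (auto& job : jobs) {
        std::vector<float> buf = job.get();
        for (int i = 0; i < numFrames; ++i)
            mix[i] += buf[i];               // simple summing mix
    }
    return mix;
}
```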

- What thread synchronization mechanisms should one use for the different interactions?
Especially the interaction to and from the Audio Engine.
Black art, with a million different answers. Coming up with a good solution here is how you earn your pay. :)
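
For what it's worth, one common pattern (an assumption on my part, not a prescription) is a single-producer/single-consumer ring buffer, so the GUI thread can post parameter changes to the audio thread without locks or allocation:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

struct ParamChange { int paramId; float value; };

class SpscQueue {
public:
    // GUI thread: returns false if the queue is full.
    bool push(const ParamChange& msg) {
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) % kSize;
        if (next == read_.load(std::memory_order_acquire)) return false;
        slots_[w] = msg;
        write_.store(next, std::memory_order_release);
        return true;
    }
    // Audio thread, at the start of each block: returns false when empty.
    bool pop(ParamChange& out) {
        std::size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire)) return false;
        out = slots_[r];
        read_.store((r + 1) % kSize, std::memory_order_release);
        return true;
    }
private:
    static constexpr std::size_t kSize = 1024;
    std::array<ParamChange, kSize> slots_{};
    std::atomic<std::size_t> write_{0}, read_{0};
};
```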

- What is the best way to manage MIDI and audio tracks with routing (side chains, sub-groups, etc.)?
How and where do you merge MIDI and audio?
MIDI and audio don't merge. But VST events, including MIDI messages, are timestamped with a "deltaFrames" value that indicates how far into the next process buffer the event should happen. Some hosts will deliver real-time MIDI events with a negative deltaFrames value, reflecting the fact that it's always too late to sync a MIDI event in real time. (But on playback, these events can be shifted back in time so they CAN happen when they were recorded. Whether that's a good strategy is up to you. Reaper, for example, lets you adjust this to taste. Do you hear it as you heard it while recording? Or better aligned than when you recorded it? There's no one right answer.)
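
A sketch with hypothetical types of how a host might use those timestamps on playback: take each event whose absolute sample position falls inside the coming block, convert it to a block-relative deltaFrames offset, and hand the list to the plugin before processing that block.

```cpp
#include <vector>

struct MidiEvent {
    long long samplePos;     // absolute position on the timeline, in samples
    int deltaFrames;         // offset into the current process block
    unsigned char data[3];   // raw MIDI bytes
};

std::vector<MidiEvent> eventsForBlock(const std::vector<MidiEvent>& timeline,
                                      long long blockStart, int blockSize) {
    std::vector<MidiEvent> out;
    for (const MidiEvent& e : timeline) {
        if (e.samplePos >= blockStart && e.samplePos < blockStart + blockSize) {
            MidiEvent copy = e;
            copy.deltaFrames = static_cast<int>(e.samplePos - blockStart);
            out.push_back(copy);
        }
    }
    // The host would now pass 'out' to the plugin, then process the audio block.
    return out;
}
```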

- What plugin fail-safes should one consider?
Plugins do not always behave as one might expect. What are the typical errors and how do you handle them?
That's far too long a topic to get into. Testing will help you discover these.

- What are the typical pitfalls for beginning host programmers?
That they're in way over their heads, having to build a massive GUI along with a real-time (or as close as possible) system.

- Any tips or suggestions you might have.

To be clear: I'm not looking for detailed programming advice or religious language debates. I'm looking for high-level sequence-of-events info, down to pseudo-code-ish examples of how to perform these events. It's more about the logic than the code.

Thanx in advance.
You can start by taking a look at the host example in the SDK. But as with all Steinberg examples, it will be intentionally braindead in several ways as they don't want to share sophisticated techniques with us commoners. Fair enough, as the SDK is only meant to define the interface, not teach us computer science.

Post

Thanx for your extensive answer!

Note: I've numbered my questions so we can reference more easily.

1) Would a sort of double buffering technique be advantageous?
So when the driver calls back to receive a new buffer, the only action you have to perform is to copy in a buffer that is ready and waiting...?

2) gotcha.

3) So tracks and splits could be processed simultaneously? Makes sense. You'd want to make use of all those CPU cores! :D

4) Oh dear. :help: :hihi:

5) I did not state the question right. Sorry for that. The question is more about how to manage all that information, what algorithms to use for merging (all MIDI events and all audio samples, not MIDI with audio), and when to perform those actions.

6) Hmmm... OK.

7) I think I can handle that :wink:

8 ) I was not planning on looking at the SDK samples, but at the various open source Hosts - without trying to copy them of course. :hihi:
Grtx, Marc Jacobi.
VST.NET | MIDI.NET

Post

obiwanjacobi wrote:Thanx for your extensive answer!

Note: I've numbered my questions so we can reference more easily.

1) Would a sort of double buffering technique be advantageous?
So when the driver calls back to receive a new buffer, the only action you have to perform is to copy in a buffer that is ready and waiting...?
I believe the audio driver defines that, and yes, it has to be at least a double buffer (the audio hardware is playing one while your host is filling the other). You are free to add even more, but that will of course increase latency which is the necessary evil we all fight against.
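
A sketch of that arrangement (the names are mine, not any real driver API): the driver owns two hardware buffers and tells the host which half to fill on each callback, while the hardware plays the other half.

```cpp
#include <cstddef>
#include <vector>

struct DoubleBuffer {
    std::vector<float> half[2];   // two interleaved hardware buffers
    DoubleBuffer(int frames, int channels) {
        half[0].assign(static_cast<std::size_t>(frames) * channels, 0.0f);
        half[1].assign(static_cast<std::size_t>(frames) * channels, 0.0f);
    }
};

// Called by the driver; 'index' selects the half the host must fill right now.
void onBufferSwitch(DoubleBuffer& db, int index,
                    void (*renderBlock)(float* dest, int samples)) {
    std::vector<float>& dest = db.half[index];
    renderBlock(dest.data(), static_cast<int>(dest.size()));
}
```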

2) gotcha.

3) So tracks and splits could be processed simultaneously? Makes sense. You'd want to make use of all those CPU cores! :D

4) Oh dear. :help: :hihi:

5) I did not state the question right. Sorry for that. The question is more about how to manage all that information, what algorithms to use for merging (all MIDI events and all audio samples, not MIDI with audio), and when to perform those actions.
Well, sorry, that's far too general a question, with no one particular right answer. This is the guts of your work.
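
If it helps frame the problem, here is one possible shape of that "merging", offered as a guess rather than a recommendation: per block, combine each track's already-sorted MIDI events into one timestamp-ordered list, and sum each source's audio into its destination bus.

```cpp
#include <algorithm>
#include <vector>

struct Ev { long long samplePos; unsigned char data[3]; };

// Merge each track's already-sorted event list into one list ordered by time.
std::vector<Ev> mergeTrackEvents(const std::vector<std::vector<Ev>>& perTrack) {
    std::vector<Ev> merged;
    for (const auto& track : perTrack)
        merged.insert(merged.end(), track.begin(), track.end());
    std::stable_sort(merged.begin(), merged.end(),
                     [](const Ev& a, const Ev& b) { return a.samplePos < b.samplePos; });
    return merged;
}

// Sum one source's audio into a destination bus (sub-group, side chain, master).
void mixInto(std::vector<float>& bus, const std::vector<float>& source) {
    for (std::size_t i = 0; i < bus.size() && i < source.size(); ++i)
        bus[i] += source[i];
}
```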

6) Hmmm... OK.

7) I think I can handle that :wink:

8 ) I was not planning on looking at the SDK samples, but at the various open source Hosts - without trying to copy them of course. :hihi:
You should look at everything and anything you can get source for.

Note that I've not written a host yet. But I think my educated guesses here are correct. Implementing plug-ins illuminates a lot of what the host must be doing.

Post

obiwanjacobi wrote: 1) Would a sort of double buffering technique be advantageous?
So when the driver calls back to receive a new buffer, the only action you have to perform is to copy in a buffer that is ready and waiting...?
This adds latency. Latency sucks. You will probably need SOME buffering, but always remember to consider how much latency everything adds.
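
Some rough arithmetic for how that buffering adds up (illustrative numbers only): one 256-sample buffer at 44.1 kHz is about 5.8 ms, and every extra buffer in the chain adds another block of delay.

```cpp
#include <cstdio>

int main() {
    const double sampleRate   = 44100.0;
    const int bufferFrames    = 256;
    const int buffersInChain  = 2;   // e.g. classic double buffering

    double msPerBuffer = 1000.0 * bufferFrames / sampleRate;
    std::printf("%.1f ms per buffer, %.1f ms total\n",
                msPerBuffer, msPerBuffer * buffersInChain);
    return 0;
}
```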

Post

Use JUCE; it provides solutions to a lot of your questions.

For communication between threads, and the audio device i/o callback, the VFLib library (in my signature) provides a robust set of tools (ThreadWithCallQueue and Listeners).

The SimpleDJ application (in my signature) provides a working example of communicating with the audio i/o callback and the GUI.

Post

thevinn wrote:Use JUCE; it provides solutions to a lot of your questions.
Except be advised it can't yet handle 64 bit audio data, which many customers now expect and demand.

Post

If by many, you mean you, then yes :)
Olivier Tristan
Developer - UVI Team
http://www.uvi.net

Post

otristan wrote:If by many, you mean you, then yes :)
I'm sharing valuable market research with you, otristan; you could be more appreciative of it. Believe me, we don't typically get good responses when we respond to customer (or potential customer) requests with "you don't need that". (As the JUCE developer, Jules, does.)

Post

AdmiralQuality wrote:I'm sharing valuable market research
In the interest of sharing, let's also disclose that you were quite rude and insulting in the JUCE forum, and used language unbecoming of a professional. You've made this your personal vendetta and frankly this is borderline trolling behavior.

If he needs 64-bit sample support in processReplacing(), he will look into that himself. There's no need to badger on about 64-bit plugins to anyone interested in developing a host who hears about JUCE.

Besides, his alternative is to write the host himself (which it seems he's willing to do). Given that, it is a trivial matter to start with JUCE and then if 64-bit sample support is necessary, simply add it yourself instead of starting over with everything from scratch.

I've been on this forum for a while and usually you're quite helpful and mature but I don't really understand why you've turned. Personal problems? Rabies?

Post

thevinn wrote:
AdmiralQuality wrote:I'm sharing valuable market research
In the interest of sharing, let's also disclose that you were quite rude and insulting in the JUCE forum, and used language unbecoming of a professional. You've made this your personal vendetta and frankly this is borderline trolling behavior.
No. Jules was rude to me FIRST, after I informed him that (my exact words:) "unfortunately I'm forced to badmouth JUCE". Nothing rude about that, just a simple statement of fact.

And don't call ME unprofessional. I'm the one who SUPPORTS and RESPECTS my customers' requests. Not pooh-pooh their feature (and COMPATIBILITY) requests by telling them they could never hear the difference. That won't get you far in this business.

If he needs 64-bit sample support in processReplacing(), he will look into that himself. There's no need to badger on about 64-bit plugins to anyone interested in developing a host who hears about JUCE.
I disagree. I'm saving anyone that that's important to vital time. (JUCE doesn't really come with a list of features it's LACKING, does it?)

Besides, his alternative is to write the host himself (which it seems he's willing to do). Given that, it is a trivial matter to start with JUCE and then if 64-bit sample support is necessary, simply add it yourself instead of starting over with everything from scratch.

I've been on this forum for a while and usually you're quite helpful and mature but I don't really understand why you've turned. Personal problems? Rabies?
If it was so trivial, why didn't Jules implement it in April? Or even before? (I didn't start that thread.)

And I AM being helpful here. If you need 64 bit audio, FORGET JUCE. That's all!

Post

I don't want to hijack this thread, because I know that if I respond with anything substantive you will just keep pouring more invective out. Anyone who is interested in learning more can just read the conversation for themselves on the JUCE forum:

http://www.rawmaterialsoftware.com/view ... 3&start=45

For those who can't be bothered to wade through all that nonsense, this sums it up succinctly:
AdmiralQuality wrote:Wow, you're a dick. See ya.

Post

thevinn wrote:I don't want to hijack this thread, because I know that if I respond with anything substantive you will just keep pouring more invective out. Anyone who is interested in learning more can just read the conversation for themselves on the JUCE forum:

http://www.rawmaterialsoftware.com/view ... 3&start=45

For those who can't be bothered to wade through all that nonsense, this sums it up succinctly:
AdmiralQuality wrote:Wow, you're a dick. See ya.
Thank you for self-servingly quoting me out of context. I said that after he was a dick. What's your point? (And way to hijack this thread for your own fanboy purposes, thevinn! I was replying on-topic with valuable and absolutely true information. YOU are conducting a smear campaign and attempting to organize a pile-on.)

Post

Thanx for the "good" read early in the morning. I couldn't really see why that dispute had to be dragged over here - it seems the statement was correct (JUCE does not do 64 bit audio).

Anyway, I can keep it short: I'm not going to use JUCE since it's not going to be in C/C++. Now back to the subject please.
Grtx, Marc Jacobi.
VST.NET | MIDI.NET

Post

obiwanjacobi wrote:Thanx for the "good" read early in the morning. I couldn't really see why that dispute had to be dragged over here - it seems the statement was correct (JUCE does not do 64 bit audio).

Anyway, I can keep it short: I'm not going to use JUCE since it's not going to be in C/C++. Now back to the subject please.
There are two reasons not to use it, then. (And yes, one simple sentence that's true gets piled on by angry fanboys. I'm not sure they're doing JUCE any favors there.)

What language are you planning to code in if not C++? You don't have a lot of options (particularly ones that work reliably).
