Receiving MIDI CC data...

DSP, Plugin and Host development discussion.

Post

How do you guys deal with incoming MIDI data in a plugin?

The correct thing to do would be to intersperse note on/offs & CC data based on their timestamps while processing audio, but does anyone take any shortcuts? Meaning, do you process some events just once per block instead? Do you thin any of the CC data?

I'm just trying to be as CPU efficient as possible.

Thanks.

Post

Meaning, do you process some events just once per block instead?
What you can do is enqueue MIDI events in a priority queue where they will be pulled in order by the audio thread for the next N samples.

Code: Select all

class MidiQueue
{
public:
    void enqueue(MidiMessage m, int timestampInFrames);
    void pullNextMidiMessages(int frames, std::vector<MidiMessage>& outMessages);
};
This lets you process pending MIDI every, say, 32 samples.
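To make that interface concrete, here is a minimal single-threaded sketch. `MidiMessage` is a placeholder type of my own; a real plugin would use its framework's event type, and delivery from a MIDI thread to the audio thread would additionally need a lock-free queue.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical message type; a real plugin would use its framework's MIDI type.
struct MidiMessage {
    int status = 0;   // e.g. 0x90 = note on
    int data1  = 0;
    int data2  = 0;
    int timestampInFrames = 0;  // offset from "now" in samples
};

class MidiQueue {
public:
    void enqueue(MidiMessage m, int timestampInFrames) {
        m.timestampInFrames = timestampInFrames;
        // Insert sorted by timestamp so the audio thread pulls events in order.
        auto it = std::upper_bound(queue_.begin(), queue_.end(), m,
            [](const MidiMessage& a, const MidiMessage& b) {
                return a.timestampInFrames < b.timestampInFrames;
            });
        queue_.insert(it, m);
    }

    // Move every message due within the next `frames` samples into outMessages,
    // then shift the remaining timestamps so they stay relative to "now".
    void pullNextMidiMessages(int frames, std::vector<MidiMessage>& outMessages) {
        auto firstKept = queue_.begin();
        while (firstKept != queue_.end() && firstKept->timestampInFrames < frames) {
            outMessages.push_back(*firstKept);
            ++firstKept;
        }
        queue_.erase(queue_.begin(), firstKept);
        for (auto& m : queue_)
            m.timestampInFrames -= frames;
    }

private:
    std::vector<MidiMessage> queue_;
};
```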
Check out our VST3/VST2/AU/AAX/LV2:
Inner Pitch | Lens | Couture | Panagement | Graillon

Post

Thank you. Yes, I was thinking of something like that.

But what I'm really interested in is how are other developers treating this data in shipping plugins. What are best practices? As I said, I'm looking for tips to keep the CPU usage as low as possible.

Is quantizing MIDI events every 32 samples a reasonable rule of thumb? Have others found better values or problems with this?

Obviously, more interruptions while processing the audio mean a higher CPU load, and that's what I'm trying to optimize. That's also why I asked about the possibility of thinning incoming CC data.
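For what it's worth, one simple way to thin CC data (a sketch with made-up names, not a recommendation from this thread) is to keep only the last value seen for each controller number within a block, leaving note on/offs untouched:

```cpp
#include <map>
#include <vector>

struct CcEvent {
    int cc;     // controller number, e.g. 74 = filter cutoff
    int value;  // 0..127
};

// Keep only the last value of each controller within the block.
// Note on/off events must never be thinned; this applies to CCs only.
std::vector<CcEvent> thinCcEvents(const std::vector<CcEvent>& events) {
    std::map<int, int> lastValue;   // cc number -> latest value
    std::vector<int> order;         // first-seen order, for stable output
    for (const auto& e : events) {
        if (lastValue.find(e.cc) == lastValue.end())
            order.push_back(e.cc);
        lastValue[e.cc] = e.value;
    }
    std::vector<CcEvent> out;
    for (int cc : order)
        out.push_back({cc, lastValue[cc]});
    return out;
}
```

Note that thinning discards intermediate values, so smoothing the parameter toward the thinned target is usually needed to avoid zipper noise, and state-changing controllers (Bank Select, for instance) should be exempt.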

Thanks.

Post

I know this isn't really answering your question, but unless you have already combed through the rest of your real-time code and optimized it, I would say this is a case of premature optimization. I doubt that MIDI processing will be your bottleneck. Your energy might be better spent elsewhere if you're trying to improve efficiency. Just food for thought.

That being said, I handle it with the method Guillaume described, more or less.

Post

Actually, yes, I have combed through all my code and optimized it. The reason I bring this up is because the greatest speed gain I made was when I changed everything to work a buffer at a time vs per sample. And because of that, I'm hesitant to make those buffer sizes too small for the sake of MIDI accuracy that might not be worth it.

Post

You could do both, i.e. use your optimized buffer version when there is no MIDI, and the per-sample version if there is any MIDI to parse.
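As a sketch of that dispatch (all names here are hypothetical, and the render routines are placeholder stubs):

```cpp
#include <string>
#include <vector>

struct MidiMessage { int status, data1, data2, frameOffset; };

// Which path the last processBlock call took (for illustration only).
static std::string lastPath;

// Optimized whole-buffer path (placeholder body).
void renderFast(float* out, int numFrames) {
    lastPath = "fast"; (void)out; (void)numFrames;
}

// Sample-accurate path that splits the buffer at MIDI offsets (placeholder body).
void renderSplit(float* out, int numFrames, const std::vector<MidiMessage>& midi) {
    lastPath = "split"; (void)out; (void)numFrames; (void)midi;
}

void processBlock(float* out, int numFrames, const std::vector<MidiMessage>& midi) {
    if (midi.empty())
        renderFast(out, numFrames);          // common case: no MIDI, stay on the fast path
    else
        renderSplit(out, numFrames, midi);   // only pay for sample accuracy when needed
}
```

Since most blocks in a typical performance carry no MIDI at all, the fast path dominates and the per-sample cost is only paid on the rare event-carrying blocks.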

Post

The correct thing to do would be to intersperse note on/offs & CC data based on their timestamps while processing audio, but does anyone take any shortcuts? Meaning, do you process some events just once per block instead? Do you thin any of the CC data?
Quite frankly, I don't really understand your question. You apparently already process audio (a sample player?) in a function called something like processBlock, depending on the framework you're using? And you likewise have access to the received MIDI data in some sort of queue, like Guillaume sketched out?

Your processBlock function (or whatever it's called) probably works in sample space, so you have to convert the MIDI timestamps to sample numbers. Now check the first MIDI event in the queue. If its timestamp/sample number is too old for the current processBlock call, skip it; if it's too new, leave it for the next processBlock call. Only a MIDI event that lies within the timespan of the current processBlock call needs to be dealt with. If it's a note-on due to happen, say, 10 samples after the first sample of the processBlock, you put the first sample of the note to be played into the audio buffer at that position, the next at the following position, and so on.
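That splitting can be sketched as a render loop over a toy synth. The CC7-to-gain mapping and all names here are illustrative, not from any particular framework:

```cpp
#include <vector>

struct MidiMessage { int status, data1, data2, frameOffset; };

// Toy synth state: a single gain value controlled by CC7.
struct Synth {
    float gain = 1.0f;
    void handleEvent(const MidiMessage& m) {
        if ((m.status & 0xF0) == 0xB0 && m.data1 == 7)  // CC7 = channel volume
            gain = m.data2 / 127.0f;
    }
    void render(float* out, int start, int numFrames) {
        for (int i = 0; i < numFrames; ++i)
            out[start + i] = gain;                      // stand-in for real DSP
    }
};

// Render a block, splitting at each MIDI event's sample offset.
// Events are assumed sorted by frameOffset and within [0, numFrames).
void processBlock(Synth& s, float* out, int numFrames,
                  const std::vector<MidiMessage>& midi) {
    int pos = 0;
    for (const auto& m : midi) {
        if (m.frameOffset > pos) {
            s.render(out, pos, m.frameOffset - pos);  // audio up to the event
            pos = m.frameOffset;
        }
        s.handleEvent(m);                             // apply event exactly on time
    }
    if (pos < numFrames)
        s.render(out, pos, numFrames - pos);          // tail after the last event
}
```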

I don't really understand why you'd need to quantize MIDI events at all, bearing in mind they're much less frequent than the stream of audio samples.

And you can't mess with the order or timing of CC messages in relation to other events. A MIDI file might quite possibly start with Bank Select 1, followed by a program change to instrument 19 and a CC10 event with value 127, all with a timestamp of 0.0 s. That would mean: select rock organ from bank 1 and pan it hard right. If you skipped or postponed the CC messages, a piano might sound instead.

Of course, not all CCs need to be sample accurate; changing the vibrato speed, for example, could be outsourced to another thread if more convenient, without anyone possibly noticing.
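A minimal sketch of that idea, with an invented CC-to-speed mapping: the control value lives in a std::atomic<float> that is written wherever the MIDI arrives and read by the audio thread at block rate, with no locks and no sample accuracy:

```cpp
#include <atomic>

// Control-rate parameter: written from the MIDI/message thread,
// read by the audio thread once per block.
std::atomic<float> vibratoSpeedHz{5.0f};

// Called when the vibrato-speed CC arrives (mapping 0..127 -> 0.1..10.1 Hz
// is an arbitrary choice for this example).
void onVibratoCc(int value) {
    vibratoSpeedHz.store(0.1f + value * (10.0f / 127.0f),
                         std::memory_order_relaxed);
}

// Called at the top of the audio callback.
float currentVibratoSpeed() {
    return vibratoSpeedHz.load(std::memory_order_relaxed);
}
```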
