Organizing events in a signal chain

DSP, Plugin and Host development discussion.

Post

Hi,
I've started working on an FM synth as my first real audio project and came across a design issue that I thought I would ask about here: how to organize the chain of processing events when there are many connected objects.

For instance, I had something like this:
Osca.out(&Oscb, note), where Osca's out() function processes a frame and also calls its modulator's out() function in the process. So every node's out() function is called by another node in a chain. This started to become complicated once I tried to organize it around lots of oscillators, envelopes, amplifiers, summers, and voice objects.
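
To make that concrete, here's roughly what my chained version looks like, pared down (the names and FM math are just illustrative, not my actual code):

    // Simplified sketch of the chained "pull" style (invented names and
    // parameters). Each oscillator's out() pulls a sample from its modulator
    // first, so one call per frame walks the whole chain.
    #include <cmath>

    class Osc {
    public:
        void setModulator(Osc* m) { mod = m; }
        float out(float freqHz) {
            // Pull one sample from the modulator, if any (recurses up the chain).
            float modSample = mod ? mod->out(freqHz * ratio) : 0.0f;
            phase += twoPi * (freqHz + modSample * depthHz) / sampleRate;
            if (phase > twoPi) phase -= twoPi;
            return std::sin(phase);
        }
    private:
        static constexpr float twoPi = 6.28318530f;
        Osc*  mod        = nullptr;
        float ratio      = 2.0f;      // modulator frequency ratio (arbitrary)
        float depthHz    = 100.0f;    // FM depth in Hz (arbitrary)
        float phase      = 0.0f;
        float sampleRate = 44100.0f;
    };

    // Usage: oscB modulates oscA.
    //   Osc oscA, oscB;
    //   oscA.setModulator(&oscB);
    //   float sample = oscA.out(220.0f);   // one frame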

The other way to do things seemed to be to connect nodes simply with pointers to an output data member in another class, and have all the processing events actuated globally for each frame by some observer object that everything registers with.

It's basically a distinction between having each node actuate another node's process() function, versus having everything connected just by referencing output variables and having all the nodes' process() functions called globally.
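
In other words, something like this (a simplified sketch with invented names; note that the engine's processing order is just the registration order, which is where my worry about ordering comes from):

    // Sketch of the "global" style: nodes expose an output value, connections
    // are plain pointers to those outputs, and an engine calls every node's
    // process() once per frame.
    #include <vector>

    struct Node {
        float output = 0.0f;
        virtual void process() = 0;
        virtual ~Node() = default;
    };

    struct Engine {
        std::vector<Node*> nodes;      // processing order == registration order
        void add(Node* n) { nodes.push_back(n); }
        void renderFrame() {
            for (Node* n : nodes)
                n->process();          // one global tick per frame
        }
    };

    struct Gain : Node {
        const float* in = nullptr;     // points at another node's output member
        float amount = 0.5f;
        void process() override { output = (in ? *in : 0.0f) * amount; }
    };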

Has anyone come across this dilemma, and is there an established way to do this? The global way seems a little clunky, and the processing events might not execute in the correct order, but with the chained actuation style I was starting to run into problems where I had to hard-code extra functionality and things got confusing.

Sorry for the ponderous post and thanks in advance for any help!

Post

Well, there are many ways to solve this and none will be the "right one"; it depends on your architecture.

I prefer to organize this kind of stuff using two types of entities: streams (or channels, whatever you want to call them) and filters.
A stream delivers data (it doesn't matter what kind) into a filter, which does something with that data (again, it doesn't matter what). A stream has a format and a rate. That's basically it.

You have an OSC, which can accept a MIDI stream (a series of MIDI events at MIDI control rate) and can output an audio stream (a series of PCM samples at audio rate).
Then you have an AmpEnvelope, which can accept modulation input (a series of PCM samples at modulation rate) and audio, and can output audio.
Then you have an LFO, which can accept modulation input (a series of PCM samples at modulation rate) and can output modulation (a series of PCM samples at modulation rate), and so on.
Afterwards you just connect pins. The MIDI stream goes into the OSC, the audio stream goes into the AmpEnv, which gets additional modulation input from the LFO's modulation output.
It's just a matter of connecting pins; no function calls or pointer access across objects. No more OSCa writing to OSCb: OSCa writes to a stream, and OSCb reads from that stream.
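
A rough sketch of what I mean (the names are mine, not from any particular framework, and for brevity all streams here share one block size):

    // Stream/filter sketch. A stream is a typed buffer with a rate; filters
    // read from and write to streams instead of calling each other.
    #include <cstddef>
    #include <vector>

    struct Stream {
        enum class Rate { Audio, Modulation };
        Rate rate;
        std::vector<float> data;      // one block of samples
    };

    struct Filter {
        virtual void process() = 0;
        virtual ~Filter() = default;
    };

    struct AmpEnv : Filter {
        Stream* audioIn  = nullptr;   // these pointers are the "pins"
        Stream* modIn    = nullptr;
        Stream* audioOut = nullptr;
        void process() override {
            for (std::size_t i = 0; i < audioOut->data.size(); ++i) {
                float mod = modIn ? modIn->data[i] : 1.0f;
                audioOut->data[i] = audioIn->data[i] * mod;
            }
        }
    };

    // Connecting pins is just assigning stream pointers:
    //   ampEnv.audioIn  = &oscOut;
    //   ampEnv.modIn    = &lfoOut;
    //   ampEnv.audioOut = &voiceBus;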

Post

Yes, I ended up doing something like that, connecting things as needed.

I was wondering about the top-down approach mostly because I had seen it in the source code of things that are meant to be open-ended and modular, like PureData and the Java library Minim. In Minim there is code that keeps track of the signal chain and globally tells all the nodes to generate a sample, and I think PureData does something similar, where objects have a perform() function that is called externally. I'm guessing people who make emulated modular stuff use some architecture like that, but I'm not sure if it's common. Since what I'm making doesn't need to be that flexible, I just went with the more straightforward approach.

I'd be interested to hear whether anyone has used an observer pattern to control things with a lot of patchable generators and filters.

Post

By observer pattern, do you mean having an Object with DoThing() and an ObjectObserver with OnBeforeDoingThing() and OnDidTheThing()?
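
I.e. something of this shape (bare-bones sketch using the names above):

    struct ObjectObserver {
        virtual void OnBeforeDoingThing() {}
        virtual void OnDidTheThing() {}
        virtual ~ObjectObserver() = default;
    };

    struct Object {
        ObjectObserver* observer = nullptr;
        void DoThing() {
            if (observer) observer->OnBeforeDoingThing();
            // ... do the actual work ...
            if (observer) observer->OnDidTheThing();
        }
    };
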
I use it quite often, but not in signal processing. I hate callback concepts between "functional blocks" there.
It works great in little single-threaded systems and usually ends up in a huge mess as things grow.
There's nothing wrong with perform() on OSCa calling into OSCb, which calls into EnvA, which calls into the UI, which calls back into OSCa...
...if the system is small.
But if that last call back into OSCa causes any kind of issue, like holding a mutex while the UI dispatches to another thread (=> deadlock), you have a problem.

But as I said, there is no "best solution"; it all depends on what you plan to build.
A little single-threaded synth with two knobs?
Go for "call perform() and let everything call into everything until a sample comes out".
If you're building a DAW, this won't work. There you need defined interfaces and, even more important, you need to think about how they interact. Things like clocks, event queues, and streams are your friends for that task.
