Voice Stealing

DSP, Plugin and Host development discussion.

Post

Urs wrote:Dynamic voice allocation is a really bad thing. Both the render thread and the midi thread are typically interrupt-level threads where any sort of malloc() or new() is not exactly a good idea.

I have this vision of someone who's got a hanging note... he hits the keyboard with his forearm to get rid of it... but he doesn't... no voice limit... instead, 100MB of his RAM are now used for voices he'll never play... :hihi:

Also, reasonable optimization might require *knowing* how the memory is laid out. My plugins do a single malloc to allocate all resources required for realtime operation. This allows me to do cache-friendly memory accesses, which IMHO brings a huge speed boost...

Cheers,

;) Urs
Dynamic voice allocation on a synthesizer normally does NOT mean calling memory allocation. This can be solved very differently and efficiently without any memory allocations ...

In my opinion, dynamic voice allocation on synthesizers mainly means that the user is able to adjust the maximum number of voices to be played without restarting or re-initializing the plugin (even dynamically). A kind of voice limiting.

That is in fact already implemented in most professional products (e.g. Kontakt, Albino, Vanguard, to name just a few that come to mind ...)

And this adjustability even requires a decent note-stealing algorithm ...
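The adjustable voice limit described above can be sketched without any runtime allocation: the pool is allocated once at full capacity, and the user's limit only changes how much of it is used. The names and the capacity of 64 here are illustrative assumptions, not any particular product's design.

```cpp
#include <array>
#include <cstddef>

// Illustrative voice record; real voices would hold oscillator,
// filter, and envelope state.
struct Voice {
    bool active = false;
    int  note   = -1;
};

class VoicePool {
public:
    static constexpr std::size_t kCapacity = 64;  // allocated once, up front

    // The user-facing "max voices" knob: clamps into [1, kCapacity].
    // Changing it never allocates or frees memory.
    void setVoiceLimit(std::size_t limit) {
        limit_ = limit < 1 ? 1 : (limit > kCapacity ? kCapacity : limit);
        // Voices above the new limit are silenced, never freed.
        for (std::size_t i = limit_; i < kCapacity; ++i)
            voices_[i].active = false;
    }

    std::size_t voiceLimit() const { return limit_; }

    // Returns a free voice within the current limit, or nullptr
    // (at which point a stealing policy would kick in).
    Voice* allocate(int note) {
        for (std::size_t i = 0; i < limit_; ++i) {
            if (!voices_[i].active) {
                voices_[i].active = true;
                voices_[i].note   = note;
                return &voices_[i];
            }
        }
        return nullptr;
    }

private:
    std::array<Voice, kCapacity> voices_{};  // no malloc after construction
    std::size_t limit_ = kCapacity;
};
```

When allocate() returns nullptr, the note-stealing algorithm takes over, which is exactly why the adjustable limit and the stealing policy go hand in hand.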

.

Post

On reflection perhaps I did not explain myself as clearly as I should have.

When I said "add more voices" I did not mean dynamically. I was thinking more along the lines mentioned by upsidedown. The user gets to choose the number of voices.

The main point of my idea was that if a user could choose the number of voices and was aware that he/she did not have enough, then they would be in a position to choose what to do.

I am certainly not saying that the other alternatives are bad. Far from it - they are the norm. All I was trying to say is that such methods will eventually fall down.

As an example, suppose I have six voices but the user attempts to play twenty. What happens?

(I accept this is an extreme example)

I only intended to show that given the (ever) increasing performance of pc's and the (ever) expanding amount of ram, I feel it is time to allow the user to choose.

Of course this is unprofessional - any product developed my way would show its flaws until the user had tweaked it to work correctly. But this is only an issue of user training. In effect, we offer note stealing because everybody else does, and any deviation from the norm can (and will) reflect badly on our product.

On the issue of my ideas being uninformative - I accept that I did not clarify my point well enough - but - if thinking outside of the box is uninformative then...

Post

Imagine a very simple situation (which can happen every day in a musician's life):

He has a big arrangement in which he uses a multitimbral synthesizer. He has only 24 voices of polyphony with this synthesizer, and even then the load on his CPU would reach its maximum nevertheless.

So he limits the voices for the pad to a maximum of 6, because he will never play more than 3 notes at the same time and all the other voices are occupied by other patches, but...

...this pad has a very long release time ...

This is the point where intelligent voice stealing comes into the game. If it is done right, he keeps his performance acceptable and is actually able to play this pad with a reasonable (and interesting) sonic result ...

If it is done wrong, he cannot trigger a new note as long as the release tails are still playing, with the final result of problematic holes in the arrangement - missing fundamental notes ...
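A stealing policy for the pad scenario above can be sketched roughly like this: prefer the oldest voice that is already in its release phase (its tail is the most expendable), and only fall back to stealing a held note when nothing is releasing. The fields and priorities here are assumptions for illustration, not any specific synth's algorithm.

```cpp
#include <cstddef>
#include <vector>

// Minimal per-voice state needed by the stealing decision.
struct Voice {
    bool     active    = false;
    bool     releasing = false;  // key already lifted, tail still sounding
    unsigned startTime = 0;      // monotonically increasing note counter
};

// Returns the index of the voice to steal, or -1 if a free voice exists.
// Priority: oldest releasing voice first, then oldest held voice.
int chooseVictim(const std::vector<Voice>& voices) {
    int victim = -1, oldest = -1;
    for (std::size_t i = 0; i < voices.size(); ++i) {
        const Voice& v = voices[i];
        if (!v.active) return -1;  // free voice available, no stealing needed
        if (v.releasing && (victim < 0 || v.startTime < voices[victim].startTime))
            victim = static_cast<int>(i);
        if (oldest < 0 || v.startTime < voices[oldest].startTime)
            oldest = static_cast<int>(i);
    }
    return victim >= 0 ? victim : oldest;  // fall back to oldest held note
}
```

A more refined policy might also protect the lowest sounding note (the fundamental the post worries about losing) or the quietest voice, but the release-first rule alone already solves the long-release-pad problem described above.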


(Another example is duophonic bass voices. An intelligent voice-stealing mechanism can produce very pleasant musical results - some old synthesizers even used this as a kind of artistic device, solving the problem of limited capacity with similarly intelligent voice-stealing mechanisms.)


Intelligent voice-stealing algorithms are actually what I would expect from any professional synthesizer; this is based on my daily experience.

And it has actually been solved satisfactorily in nearly all available hardware synthesizers for decades now.

Software synthesizer developers unfortunately often tend to believe that they are inventing everything, or that they can tell a musician what is possible and what is not.
Most of them probably have never even worked with "real synthesizers" in their lives. Otherwise they would know all this and see that most things *are* already invented, have been realized for a long time now, and in fact *are* possible, obviously...

.

Post

Okay then, in that case I'm with you ;)

Nevertheless, I've seen open source synths which *do* call new() when a new note is played and delete() when it stops. Ugh...

;) Urs

Post

I agree with most of your points upsidedown but...

My only intention in raising this issue was to question whether it is now time to change some of our (ingrained) ideas about how things work. Some plug-ins allow the user to select the number of voices, some don't. The point I was attempting to make was that unless limited voices is a specific feature of the plug-in (i.e. emulating some real hardware synth), ALL plug-ins should allow the user to select the number of voices.

My concern was that using note stealing is a cheap way out of providing a more flexible solution. If the plug-in provides both, then all is well and good.

Most existing solutions (hardware and software) have been based on available resources. The availability (cost, performance, etc.) of these resources is constantly changing.

As an example, if you have a machine that is over five years old, try to run a modern day game on it. Does it run? Does it run really well? Probably not. Why? Games developers are constantly pushing the boundaries of the available resources. Why not audio developers?

I think we are all agreed that (aside from using it as a specific effect), note stealing is a (necessary) compromise. Because this is the way it has always happened does that mean it should continue this way?

IMHO technology has moved to the point where note stealing should not be necessary - I may be wrong.

Post

Urs wrote:Okay then, in that case I'm with you ;)

Nevertheless, I've seen open source synths which *do* call new() when a new note is played and delete() when it stops. Ugh...

;) Urs
is this really such a bad thing? i'm using juce to make my plugs and the wrapper creates a new MidiMessage (the juce class for midi messages) for every VstEvent that comes in. there is no noticeable impact on performance.

Post

A MIDI message is fairly uncritical.

But a synthesizer voice might consist of several memory-intensive data members and structures, which also must be initialized at creation time.

The global memory manager would be working like crazy all the time (and slowing down the entire processing - because memory and initialization tasks are usually very slow). Then you have a kind of permanent processing discontinuity, which can in fact "break" the audio process.

If you implement a synthesizer that way, it can easily happen that the memory becomes highly fragmented the longer you play, which finally affects the entire system performance.

In my opinion it is generally "critical", and at least questionable, to permanently allocate and deallocate memory in realtime processes, for some obvious reasons. If necessary, then it should be done very smartly and not with each new note ...
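A common way to avoid per-note new()/delete() is to allocate every voice, including its heavy buffers, once at load time and merely reset a free voice on note-on. A minimal sketch, with illustrative names and sizes:

```cpp
#include <cstddef>
#include <vector>

// A voice with a "memory intensive data member" (the wavetable).
// It is constructed exactly once; playing a note never allocates.
struct Voice {
    std::vector<float> wavetable;   // heavy buffer, allocated at load time
    float phase = 0.0f;
    bool  inUse = false;

    explicit Voice(std::size_t tableSize) : wavetable(tableSize, 0.0f) {}

    void reset() { phase = 0.0f; }  // cheap re-init, no allocation
};

class Synth {
public:
    Synth(std::size_t voices, std::size_t tableSize) {
        pool_.reserve(voices);
        for (std::size_t i = 0; i < voices; ++i)
            pool_.emplace_back(tableSize);   // all allocation happens here
    }

    Voice* noteOn() {                        // realtime-safe: no new/delete
        for (Voice& v : pool_)
            if (!v.inUse) { v.inUse = true; v.reset(); return &v; }
        return nullptr;                      // pool exhausted: steal or drop
    }

    void noteOff(Voice* v) { if (v) v->inUse = false; }

private:
    std::vector<Voice> pool_;
};
```

Compared with calling new() per note, this keeps the heap untouched on the audio thread and avoids the fragmentation described above, since the voices' memory layout never changes after construction.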

.

Post

Hm, actually, I had made a case here for creating new voices on the fly, but I think it's probably more sensible to just give the synth somewhere upwards of 256 voices to begin with, and don't steal voices at all.
Don't do it my way.

Post

Depends, depends... a Zebra2 voice is somewhere around 100kB... that would be a bit too much if I made 256 of them for each instance ;)

;) Urs

Post

what makes it take up so much memory?

Post

jtxx000 wrote:what makes it take up so much memory?
Sound quality :wink:

;) Urs

Post

:hihi:

Post

:lol:

Yes!
Sound quality can eat up your entire memory and all the CPU capacity. Sound quality is quite impudent.

.

Post

Hehe, in my case it's a trade off - more memory, but way less cpu used to achieve certain things.

Some build mipmap wavetables for every octave (10+ tables per waveform), some do it for every note (100+ tables per waveform) and I do it dynamically for every oscillator (64 tables, none for different waveforms)... which in the end uses less memory in total and gives me more freedom to do certain, uhm, things...
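The per-octave scheme mentioned above (one band-limited table per octave, roughly 10 per waveform) can be sketched as a simple index lookup: higher pitches get duller tables so that no partial exceeds Nyquist. The base frequency and table count here are hypothetical, and this is not Zebra2's actual scheme.

```cpp
#include <algorithm>
#include <cmath>

// Picks a mipmap level for a given fundamental frequency:
// one table per octave above baseHz, clamped to the last table.
// All numbers are illustrative assumptions.
int mipIndex(double freqHz, double baseHz = 20.0, int numTables = 10) {
    if (freqHz <= baseHz) return 0;                 // full-bandwidth table
    int idx = static_cast<int>(std::floor(std::log2(freqHz / baseHz)));
    return std::min(idx, numTables - 1);            // dullest table at the top
}
```

The per-note and per-oscillator variants in the post differ only in where this lookup happens and how many tables exist; the selection logic itself stays about this simple.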

Cheers,

;) Urs

Post

While I am certainly not suggesting creating 256 voices automatically for each instance of any plug-in, let us look at some figures...

100k per voice * 256 voices = 25.6MB (approx)

Most machines nowadays are using 1GB - 2GB of memory.

If we assume a 1GB machine then 25.6MB is approx 1/40 of your total memory. This is nothing in comparison with other OS features and other audio programs that you will be running. If you have a 2GB machine then this represents approx 1/80 of your memory.

On a well set up 1GB machine you should be able to run 20+ instances alongside your sequencer etc. This would give you an insane 5120 voices. If you have a 2GB machine then the figures are incredible.

(Remember these figures are for a plug-in with a 100k requirement per voice - a 10k requirement per voice would yield 51,200 voices and a 1k requirement per voice would yield 512,000 voices.)

Of course the real issue would be cpu power.
