Buffer settings explained

Configure and optimize your computer for audio.

Post

Hi guys, I've been away from music production for around 3 years now; my wee hobby studio is now in a small cupboard due to an expanding family lol.
I've now sold all my hardware synths and drum machines.

I'm about to buy a new PC after reading all the latest DAW bench tests, but I'm a bit confused about buffer settings!
If I'm now going completely in the box (plug-ins only), do buffer settings matter?
Pete mentioned something in the Threadripper review about some mastering guys using low buffer settings. Would that be for those with hardware compressors etc.?

Any help and guidance on this subject would be much appreciated.
At the moment I'm looking at either the Ryzen 1700X (with the buffer issue) or, if needed, the i7-7820X.

Thanks

Post

A DAW in a nutshell:

The DAW application allocates a piece of memory to store audio data.
That's what's called the buffer.
It passes this block of memory to plugin 1, which does some processing and writes the processed audio data back into the buffer. Then the buffer is passed to plugin 2, which does the same, then plugin 3... until it arrives at the mixer and your audio chip -> speakers.
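
In rough code, one pass of that chain looks something like this (a minimal sketch; the Plugin and Gain names are made up for illustration, not any real plugin SDK):

#include <cstdio>
#include <vector>

// Made-up plugin interface, just to show the buffer being reused.
struct Plugin {
    virtual void process(std::vector<float>& buffer) = 0;  // read and overwrite in place
    virtual ~Plugin() = default;
};

// A trivial example "plugin" that scales every sample.
struct Gain : Plugin {
    float amount;
    explicit Gain(float a) : amount(a) {}
    void process(std::vector<float>& buffer) override {
        for (float& sample : buffer) sample *= amount;
    }
};

int main() {
    std::vector<float> buffer(256, 1.0f);   // the block of memory the DAW allocated
    Gain plugin1(0.5f), plugin2(2.0f);
    std::vector<Plugin*> chain = {&plugin1, &plugin2};

    // One "tick": the same buffer is handed to each plugin in turn,
    // then it would go on to the mixer -> audio chip -> speakers.
    for (Plugin* p : chain)
        p->process(buffer);

    std::printf("first sample after the chain: %f\n", buffer[0]);  // 1.0 * 0.5 * 2.0 = 1.0
    return 0;
}

The point is simply that the same block of memory is reused from plugin to plugin within one pass.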

The size of this buffer has a couple of implications:
In general, a CPU is better at processing a few big chunks of data than many small chunks of data (memory bus, caching, SMT, etc.). If you select a small buffer size, the DAW has to process lots of small chunks of data instead of a few big chunks.
This is also the reason why decreasing the buffer size can increase CPU load, or even cause audio dropouts if the CPU cannot process all these little chunks of audio data fast enough.
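
To put rough numbers on that (assuming a 48 kHz sample rate, nothing vendor-specific), here's how often the DAW has to run the whole plugin chain, and how much time each pass gets, for a few common buffer sizes:

#include <cstdio>

int main() {
    const double sampleRate = 48000.0;                    // assumed sample rate
    const int bufferSizes[] = {64, 128, 256, 512, 1024};  // typical settings

    for (int frames : bufferSizes) {
        double passesPerSecond = sampleRate / frames;    // how often the whole chain runs
        double budgetMs = 1000.0 * frames / sampleRate;  // time available per pass
        std::printf("%5d samples: %7.1f chain passes/s, %5.1f ms budget per pass\n",
                    frames, passesPerSecond, budgetMs);
    }
    return 0;
}

At 64 samples the whole chain runs 750 times per second and each pass has only about 1.3 ms to finish; at 1024 samples it's under 47 passes per second with over 21 ms each.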

So why not select the biggest buffer size possible?
The answer is latency.
Think about what happens when you push the play button.
The DAW gives the buffer to the plugins, the plugins generate 1 ms of audio almost instantly, and then the DAW sends the buffer to the audio device, which needs 1 ms to play that 1 ms buffer.
So between the plugins that generate the audio and the audio device that plays it, there is 1 ms of buffered audio data, aka the latency. If you change a plugin knob, that change will be audible on the speakers 1 ms later (since the device needs to play the buffered data first).
Now let's assume your buffer is 1 s instead of 1 ms.
The DAW gives the buffer to the plugins, the plugins generate 1 s of audio almost instantly, and then it goes to the audio device, which needs 1 s to play that 1 s buffer. This means that if you change a plugin knob, it takes 1 s until the change is audible on the speakers, because the audio device still has to play 1 s of buffered data from the previous "tick".
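
To put those example buffers into real-world terms (assuming a 48 kHz sample rate): latency from the buffer is roughly buffer size in samples divided by sample rate, so the 1 ms buffer above corresponds to 48 samples, the 1 s buffer to 48,000 samples, and a typical 128-sample setting works out to about 2.7 ms (128 / 48000), or roughly 10.7 ms at 512 samples. What you actually hear is a bit higher once driver and converter latency are added on top.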

It's about the balance between latency and CPU load.
If you don't care about latency, use big buffer settings.
If you need low latency, you need a fast CPU, a fast memory controller, a PCIe audio interface and/or similar, so that your system is capable of processing all these little audio chunks.

Post

Thanks for the reply

Post

As outlined above, that's a good rundown. Mastering guys, in general, have fewer concerns about latency, as they are working with material that is already in the box and printed as audio.

If you have a MIDI controller plugged in and are generating audio via VSTs, you need to keep the buffer in mind if you want a responsive setup.
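
As a rough illustration (assumed numbers, and before any driver or converter latency is added): at a 512-sample buffer and 44.1 kHz, a note you play can't reach the speakers sooner than about 11.6 ms later (512 / 44100), whereas 128 samples brings that part down to roughly 2.9 ms.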

Post

Thanks Pete
