Composer's Workstation

Configure and optimize your computer for audio.
Tj Shredder
KVRian
1372 posts since 6 Jan, 2017 from Outer Space

Post Tue Jul 10, 2018 5:32 am

This is a very strange thread, discussing performance details mainly important for gamers, not composers. In 2007 I helped a composer set up a computer; he chose an 8-core Mac Pro to run ProTools. (Before that he had something pretty slow, I forget what.)
After everything was set up, it turned out that his Waves plug-ins could only deal with two cores at a time. So we set it up to only use two cores. He was happy for several years performance-wise (and is still using it...).
As the OP stated in the opening post, he does not want to make compromises and wants a setup which should run for a decade... This is simply unrealistic and a complete waste of money.
After being burned myself, and having seen a lot of artists get burned by the same mistake, my advice is:
A computer setup is not like a violin or any other physical instrument. Do not invest more than you need now!!!
Do not look at the likes of Hans Zimmer or other professionals, who invest at least ten times more in the people who maintain their technology than in gear!
If you can drive a single Mac Pro, or now an iMac Pro, to its limits, which will not happen if you, as you said, want to grow, you can start to think about networking more than one machine. With tools like Dante it's not that hard and pretty straightforward. (Following this thread, I am glad that as a Mac user I do not need to waste time thinking about technical details completely irrelevant for musicians... ;-) )
One example, as I am interested in technical details: I read computer magazines, but never came across the term NUMA. I searched for it and found some articles (keep away from forums about topics you are not familiar with, nothing to learn there, read articles...).
NUMA would not increase latency, it would decrease it; that is why it was invented. The latency they are talking about is in the range of 100 ns vs 200 ns. The latency musicians are concerned about is in the range of milliseconds, tens of thousands of times longer. It is discussed in the context of virtual machines and is only relevant if one core needs to access memory assigned to a different core.
This sort of problem simply does not arise in an audio machine, especially if you give it lots of RAM...
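The scale difference being described can be put into numbers with a quick sketch; the nanosecond figures here are the illustrative values from this thread, not measurements:

```python
# Compare NUMA memory-access latency to audio buffer latency.
# 100 ns / 200 ns are the thread's illustrative figures.

NS_PER_MS = 1_000_000

local_access_ns = 100      # local memory access (illustrative)
remote_access_ns = 200     # remote NUMA-node access (illustrative)

sample_rate = 44_100       # Hz
buffer_samples = 256       # a typical low-latency buffer size

buffer_latency_ms = buffer_samples / sample_rate * 1000
buffer_latency_ns = buffer_latency_ms * NS_PER_MS

print(f"Buffer latency: {buffer_latency_ms:.2f} ms")
print(f"One remote access adds {remote_access_ns - local_access_ns} ns, "
      f"{(remote_access_ns - local_access_ns) / buffer_latency_ns:.2e} of a buffer")
```

A single extra hop really is vanishingly small next to a buffer; the later posts in this thread are about what happens when such hops occur by the thousand inside every cycle.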
Get an iMac Pro or a comparable PC and start making music. It is most likely sufficient NOW. If you touch its limits in, let's say, X years, you can sell it and get a faster one for much less money than that performance would have cost X years earlier. Or, depending on your workflow, you might prefer a second machine running in parallel...

Kaine
KVRAF
1815 posts since 4 Nov, 2004 from Manchester

Re: Composer's Workstation

Post Tue Jul 10, 2018 8:29 am

Tj Shredder wrote: One example, as I am interested in technical details: I read computer magazines, but never came across the term NUMA. I searched for it and found some articles (keep away from forums about topics you are not familiar with, nothing to learn there, read articles...).
NUMA would not increase latency, it would decrease it; that is why it was invented. The latency they are talking about is in the range of 100 ns vs 200 ns. The latency musicians are concerned about is in the range of milliseconds, tens of thousands of times longer. It is discussed in the context of virtual machines and is only relevant if one core needs to access memory assigned to a different core.
This sort of problem simply does not arise in an audio machine, especially if you give it lots of RAM...
Yes, that's the point of NUMA, to ensure the most effective addressing of the memory by the CPU.

You have two kinds of NUMA. Soft NUMA handles single-chip scenarios, and it tries its best to match the data to the physical stick nearest to each core in question. In this scenario, when data does need to be shunted away from that memory bank, it's only a short trip, and as you say it's not really a problem.

Hard NUMA, on the other hand, is a different ballgame. Hard NUMA is what we deal with when we have multiple physical chips and multiple memory controllers. Xeon is the obvious example, but Threadripper chips are built this way too.

In this instance, if it needs to fall over to another memory bank, it has to go from memory controller 1 on CPU 1 to memory controller 1 on CPU 2, which will then assign it a memory bank to deal with. This is where lag creeps in, and whilst it's not going to affect the majority of users, for anyone working with super-tight buffers in real time, like ASIO, any lag whatsoever can throw a real spanner in the works.

It's not really about the amount of physical RAM you have, although admittedly more does help. It's about how clever the software is in managing where the data is at any given point in time. And as much as sequencers are optimized, they simply don't have the sort of low-level control over memory allocation that the OS has.
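The point about placement mattering more than quantity can be expressed as a toy weighted-average model; the latency figures are illustrative, echoing the 100 ns vs 200 ns numbers discussed here:

```python
# Toy model: average memory access time depends on *where* the data
# lands, not on how much total RAM there is. Figures are illustrative.

def avg_access_ns(remote_fraction, local_ns=100, remote_ns=200):
    """Weighted average access latency for a mix of local/remote hits."""
    return (1 - remote_fraction) * local_ns + remote_fraction * remote_ns

for frac in (0.0, 0.1, 0.5):
    print(f"{frac:.0%} remote -> {avg_access_ns(frac):.0f} ns average")
```

Adding more RAM does not change the remote fraction; only smarter placement of the data (which is the OS's job, not the sequencer's) does.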

Tj Shredder
KVRian
1372 posts since 6 Jan, 2017 from Outer Space

Re: Composer's Workstation

Post Wed Jul 11, 2018 12:25 am

What I also do not understand, in a DAW context, is the concern about latency. I do live music, where it is very important, but all DAWs I am aware of are capable of latency compensation, and a DAW is for mixing and constructing music. It's easy to look into the future of a recorded track. If you do live recording, you either have an interface with direct monitoring, or you have a mixer to feed all musicians without latency...
Latency is no concern at all in a DAW, especially considering the time sound needs to travel from the speaker to the ear... If you pushed your set with 100 solo string plug-ins to mimic an orchestra and get crackles, increase your buffer to 8192 or more samples... ;-)
Unless the latency varies, but if the variation is within microseconds, you won't be able to hear it...

Kaine
KVRAF
1815 posts since 4 Nov, 2004 from Manchester

Re: Composer's Workstation

Post Wed Jul 11, 2018 1:16 am

Wrong sort of latency entirely, which I guess is where the confusion is coming from. There are many sources and types of latency in any system, but we only need to understand three at this point.

1. You're talking about what we refer to as RTL, or Round Trip Latency: the total round-trip time taken for MIDI or audio input to be sent into a system, processed and returned back out.

2. We also have DPC (Deferred Procedure Call) latency, which is the time taken by any given piece of software to action and complete a command process.

3. Then we have NUMA latency, the time taken for data to be retrieved from its storage source.


With RTL, this manifests much as you describe, with us being concerned with making the system as responsive to the performer as possible. Of course, as you note, there is real-world latency too, with the speed of sound and all that, and I'm with you on all the points you raised.

With DPC we're talking about that ASIO buffer and the time frame it allocates per cycle. It's a buffer, nothing more than a slice of time, which fills up with commands and instructions created by the user via the DAW software, and at the end of the time period it sends that data to the CPU to be processed. If the CPU can deal with all the data before the next buffer cycle is sent, then everything is working as intended. If the CPU for any reason cannot complete the data processing before the next buffer cycle completes, then any remaining pending workload is simply flushed from the system. This disposed-of data manifests as artifacting in the audio playback, namely those pops and clicks you hear when a CPU overloads.
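That per-cycle deadline can be sketched as a toy simulation; the buffer size and per-cycle processing times below are made-up numbers for illustration:

```python
# Minimal sketch of the buffer-cycle deadline: each cycle must finish
# its processing before the next buffer is due, or the pending work is
# dropped and you hear a click. All workload numbers are hypothetical.

sample_rate = 44_100
buffer_samples = 256
deadline_ms = buffer_samples / sample_rate * 1000   # ~5.8 ms per cycle

# Hypothetical per-cycle processing times in ms
processing_ms = [4.9, 5.2, 6.1, 5.0, 7.3, 4.8]

dropouts = [t for t in processing_ms if t > deadline_ms]
print(f"Deadline per cycle: {deadline_ms:.2f} ms")
print(f"{len(dropouts)} of {len(processing_ms)} cycles overran -> audible clicks")
```

Note that the overrunning cycles are only slightly over budget, yet the result is still an audible glitch: the deadline is hard.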

With hard NUMA latency, this is where the processing requests being sent to the CPU need to look up and reference data held in memory during the CPU processing cycle.

So with soft NUMA and one CPU we see the following process:

Say for instance the sequencer says "Play C#"
The CPU says fine but I need this data from location A
It calls memory controller in location A and finds it
Memory controller in location A hands over the data
The processing cycle continues unhindered.

Now the same process with hard NUMA

The sequencer says "Play C#"
Just like in the other example CPU 1 says "hey memory controller A, sort me out some data"
Then memory controller A says "sorry chief, no can do, I ran out of space on the past cycle, I had to pass it over to Memory controller B"
CPU 1 then ponders life a little and places a call to CPU 2 asking him for the data hook up.
CPU 2 makes a request to his memory controller (B) who then recalls the data from wherever he placed it.
CPU 2 receives the data and slings it back to CPU 1
CPU 1 then gets on with the task in hand.

Now all this happens in a fraction of a fraction of a microsecond. But even with that said, the process of recalling data in example B might take twice as long as the process of calling the data in example A.
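A back-of-envelope version of that compounding, using the thread's illustrative 100 ns vs 200 ns figures and a hypothetical number of lookups per buffer:

```python
# Sketch of how the "twice as long" remote hop compounds across many
# memory accesses in one buffer cycle. The access count and latency
# figures are illustrative, not measurements.

accesses_per_buffer = 30_000    # hypothetical lookups in one cycle
local_ns, remote_ns = 100, 200  # the thread's illustrative figures

local_total_ms = accesses_per_buffer * local_ns / 1e6
remote_total_ms = accesses_per_buffer * remote_ns / 1e6

print(f"All-local:  {local_total_ms:.1f} ms of memory stalls per buffer")
print(f"All-remote: {remote_total_ms:.1f} ms of memory stalls per buffer")
```

Against a roughly 5.8 ms buffer (256 samples at 44.1 kHz), the all-local case leaves headroom for processing, while the all-remote case alone already blows the deadline; that is how nanosecond hops turn into lost CPU headroom.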

Now, if we're doing something like offline rendering then data calls can be made whenever, and they'll arrive whenever. The system will compensate by running other tasks and actions whilst it waits, so performance in that sense is optimized.

However, with RTL and ASIO handling we're working on precise slices of time. You can't process these slices out of order, we're working in real time after all. This means that any delay in the CPU being able to obtain and process the data it needs, will hold up the rest of the actions waiting for that resulting data to be made available.

Now say this happens a few hundred or thousands of times in any given ASIO buffer cycle.
Now, say that happens in every ASIO buffer cycle.

Suddenly you're running at 30%/40%/50% less available CPU performance headroom due to a data-latency issue. Yes, your RTL response time is unaffected by this. As far as you're concerned, you hit a key and something still comes out of the speakers in a nice timely fashion, but if you're suffering from NUMA latency you'll find that even though you have 2.9 ms RTL, you might be getting crackling and data corruption with the CPU meter running at 50%, which is where the concern lies.

Guenon
KVRian
1486 posts since 17 Jun, 2005

Re: Composer's Workstation

Post Thu Jul 12, 2018 11:09 am

Tj Shredder wrote: Latency is no concern at all in a DAW, especially considering the time sound needs to travel from the speaker to the ear... If you pushed your set with 100 solo string plug-ins to mimic an orchestra and get crackles, increase your buffer to 8192 or more samples... ;-)
Unless the latency varies, but if the variation is within microseconds, you won't be able to hear it...
I recall you've been at this before :P ... Different latencies, conflicting meanings, etc. Comments like that just make technical issues more opaque for newcomers, but I guess that comes with the territory; as you implied, on forums one can (and probably will) get many types of conflicting info. I mean like this: viewtopic.php?p=7040562#p6976947

About the kind of realtime latency that varies based on audio buffer lengths ("increase your buffer to 8096 or more samples") and is usually the most common type of latency mentioned on DAW forums: this is of concern in a DAW to anyone who plays their virtual instruments in realtime. That is, anyone who is actually performing the different elements as a part of the composition process, while the project is playing and evolving. Many composers/producers actually do that -- just like in turn many of them don't. In any case, this sort of latency is a realistic concern in DAW use to anyone who composes like that, and when this is the case, it's mandatory to keep the latency low enough in order to record performances effortlessly and naturally.
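For a sense of scale, here is a quick sketch of the one-way buffer latency at 44.1 kHz for a few common buffer sizes (driver and converter overhead excluded, so real round-trip figures are higher):

```python
# One-way buffer latency for a few common buffer sizes at 44.1 kHz.
# Real-world round-trip latency adds driver and converter overhead.

sample_rate = 44_100

latencies = {}
for buffer_samples in (64, 128, 256, 8192):
    latencies[buffer_samples] = buffer_samples / sample_rate * 1000
    print(f"{buffer_samples:>5} samples -> {latencies[buffer_samples]:6.2f} ms")
```

At 8192 samples the buffer alone is roughly 186 ms, which is unplayable for realtime performance, whereas 128 samples lands near the 3 ms range that feels immediate under the fingers.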
