DP9 set to get an update, claiming 4X CPU performance...

Audio Plugin Hosts and other audio software applications discussion

Post

Dewdman42 wrote:and as you say, perhaps I need to set up a bigger load test to really find out whether DP's existing pre-gen works. Here I saw absolutely zero difference with 7 instrument tracks of Omnisphere 2 playing from a MIDI track
Yeah, that's why I never do tests that aren't to failure. Every DAW seems to have a different curve in its CPU meter. Live craps out at 80%; anything above that crackles. Older versions of DP (post pre-rendering, anyway) would load to 80%, then take almost as many plug-ins again to get to 95%, i.e. you load 10 instances and it gets to 80%, then 10 more and it's at 95%.
I'm assuming you know I mean that a 'to failure' test is where you load multiple instances of a heavy plug-in with the same MIDI file in the timeline until it crackles, then back off until the audio sounds good. :wink:
It's hard to say what Logic is doing during playback to get 5% better performance than DP. I kind of doubt they are fully pre-rendering, or else the gain would be a lot more significant than just 5% better than the real-time performance I got with DP just now. But something they are doing is 5% more efficient, with 7 tracks of Omni anyway.
The consensus was that Logic can dynamically set buffers so that only record-enabled tracks are running at, say, 128, while every non-record-enabled track is running at twice that buffer setting or higher. This was mentioned by Ableton developers and plug-in developers during a conversation about why Logic completely skunks a real-time-performance DAW like Live (it's almost twice as efficient).

Post

In this case I set the buffer to 512, and I don't think I can accept the theory that Logic is dynamically changing the buffer size for certain tracks. First of all, the audio device driver has to be informed of the buffer size, and the device driver polls for that buffer on a fixed schedule. It's a global value representing how often the driver will ask for the next chunk of mixed-down audio to play.

Even if it were possible to run some tracks at different buffer sizes and latencies, you wouldn't want to do so because it would sound out of sync.

The way that a DAW goes about filling up that buffer is NOT realtime, per se. This DAW or that DAW could go about doing it any number of ways, some more efficient than others.
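
To illustrate the fixed schedule I'm describing, here is a minimal sketch of a Core Audio render callback; mixDownNextChunk is a made-up stand-in for whatever the DAW's engine does internally:

```cpp
// Minimal sketch of the fixed-schedule device callback described above.
// The driver invokes this every (bufferSize / sampleRate) seconds, and
// the host must hand back exactly inNumberFrames of mixed-down audio.
#include <AudioUnit/AudioUnit.h>

// Hypothetical stand-in for the DAW's internal mixdown engine.
static void mixDownNextChunk(float *out, UInt32 frames) {
    for (UInt32 i = 0; i < frames; ++i) out[i] = 0.0f;
}

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,  // fixed, e.g. 512
                               AudioBufferList *ioData)
{
    // However the DAW produced the audio internally (pre-rendered,
    // larger internal chunks, etc.), it must deliver exactly this many
    // frames, on this schedule, into the device buffer.
    mixDownNextChunk(static_cast<float *>(ioData->mBuffers[0].mData),
                     inNumberFrames);
    return noErr;
}
```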

Logic does it 5% more efficiently than DP 9.02, as near as I can tell. More likely this is because Logic is more optimized for real-time performance, while DP is performing some work with 5% overhead that in theory results in being able to process more plugins at the maximum. I also notice in DP that when I first hit play, the CPU bounces all over the place for the first couple of bars until it finally settles on a stable usage. Logic seems to get to the stable CPU usage almost immediately. Whatever that means. However, it's still interesting to me that when DP is in fully real-time mode, it seems to still have that 5% overhead and uses exactly the same CPU as pre-gen'd. It might be different with more plugin instances.

If I get some time I will try doing more of a max-out test to compare Logic and DP as well as DP's existing pre-gen technology, and after the update comes I will do the test again to compare first gen to next gen, and also against Logic again. It would be interesting to know if the current DP can actually handle more tracks at the maximum than Logic, even though with only a few instrument tracks it is 5% less CPU efficient. It's certainly possible that DP shovels data around in a way that is ultimately more efficient at handling more tracks and plugins, while consuming some 5% more CPU overhead to do it.

Also, when I froze tracks, both DAWs dropped to minuscule CPU use. That makes me question how much pre-rendering DP has actually been doing until now; I would think that if it were truly pre-rendering tracks, performance should rival the frozen-track scenario, and even with only a few tracks we should see that improvement. That is something like a 350% improvement in CPU use even with just a few tracks. Perhaps the current pre-gen improves things only a few percent and it takes 100 instrument tracks or 100 plugin instances to see the difference, and perhaps next gen is a 10x improvement on that in some way, hopefully without increasing that 5% overhead to get it. We'll see. One can hope.
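
To illustrate why frozen tracks are so cheap: per callback, a frozen track reduces to a buffer copy, while a live instrument runs the full DSP. A rough sketch, with made-up names:

```cpp
#include <cstring>
#include <vector>

// Hypothetical stand-ins for a DAW's track structures.
struct DiskCache {                 // pre-rendered (frozen) audio
    std::vector<float> samples;
    size_t pos = 0;
    void read(float *out, size_t n) {
        std::memcpy(out, samples.data() + pos, n * sizeof(float));
        pos += n;                  // cheap: no DSP, just a copy
    }
};

struct Synth {                     // live instrument: full DSP every callback
    void process(float *out, size_t n) {
        for (size_t i = 0; i < n; ++i)
            out[i] = /* expensive voice rendering would go here */ 0.0f;
    }
};

struct Track {
    bool frozen = false;
    DiskCache cache;
    Synth synth;
};

void renderTrack(Track &t, float *out, size_t frames) {
    if (t.frozen)
        t.cache.read(out, frames);     // why frozen tracks barely load the CPU
    else
        t.synth.process(out, frames);  // full synthesis cost
}
```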
MacPro 5,1 12core x 3.46ghz-96gb MacOS 12.2 (opencore), X32+AES16e-50

Post

Also in my testing I noticed that with pre-rendering turned on, DP was not doing much work while sitting idle. This leads me to believe that aMUSEd may be right that this is not much different than Reaper's read-ahead rendering, especially in light of the fact that totally frozen tracks were orders of magnitude easier on the CPU.

But again, if MOTU has somehow made this 10x more efficient, then it's a reason to rejoice, and we shall see.
MacPro 5,1 12core x 3.46ghz-96gb MacOS 12.2 (opencore), X32+AES16e-50

Post

Dewdman42 wrote:In this case I set the buffer to 512, and I don't think I can accept the theory that Logic is dynamically changing the buffer size for certain tracks. First of all, the audio device driver has to be informed of the buffer size, and the device driver polls for that buffer on a fixed schedule. It's a global value representing how often the driver will ask for the next chunk of mixed-down audio to play.

Even if it were possible to run some tracks at different buffer sizes and latencies, you wouldn't want to do so because it would sound out of sync.

The way that a DAW goes about filling up that buffer is NOT realtime, per se. This DAW or that DAW could go about doing it any number of ways, some more efficient than others.

Well, you are 100% wrong. That's exactly what Logic does, and it's why it makes no difference for PLAYBACK of even the largest project whether you have your input latency set to 32 samples or 512: NON-record-armed tracks in Logic are all shuffled onto a higher internal buffer.

There are three user-selectable settings: small, medium, and large, corresponding to 512, 1024, and finally 2048 samples.

So if you have small selected and have your main interface buffer at 512, then yes, it is all the same. That's true.

But for some reason, Logic at the 512 small setting still runs rings around Live's performance when that is hard-set to 512; same with S1, etc. Logic's 512 just seems to be more efficient, and the rumours are usually that Apple know how to get the best out of the problematic Core Audio when it comes to performance. It's a reliable, feature-rich protocol that simply isn't efficient vs. ASIO, especially when combined with throttling, which cannot be disabled within OSX.

That said, I would take Core Audio any day of the week so I can use a Mac and OSX. That's good enough for me.

By the way, Logic has had a hybrid buffer for 15 years. I used to be able to get at minimum double the number of plugins vs. Cubase even way back when, on Logic for Windows on a single-core machine :wink:

I am surprised you find it so hard to believe, when it's a known fact that this is how Logic works; so does Cubase now when ASIO-Guard is switched on, Pro Tools from v11 onwards, and Samplitude. Emagic was the absolute pioneer of this, though. ;)
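
Roughly, the idea is this (just a concept sketch, obviously not Logic's actual code, and every name here is made up):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr size_t kDeviceFrames   = 512;   // interface/device buffer
constexpr size_t kInternalFrames = 2048;  // "large" internal buffer

struct Track {
    bool recordArmed = false;
    std::vector<float> fifo;              // pre-rendered audio, playback tracks
    void process(float *out, size_t n) {  // stand-in for the plugin chain
        std::fill(out, out + n, 0.0f);
    }
};

void deviceCallback(std::vector<Track> &tracks, float *out) {
    std::fill(out, out + kDeviceFrames, 0.0f);
    float tmp[kInternalFrames];
    for (auto &t : tracks) {
        if (t.recordArmed) {
            // must respond to live input now: render one callback's worth
            t.process(tmp, kDeviceFrames);
        } else {
            // playback-only: render ahead in bigger, more efficient chunks
            if (t.fifo.size() < kDeviceFrames) {
                t.process(tmp, kInternalFrames);
                t.fifo.insert(t.fifo.end(), tmp, tmp + kInternalFrames);
            }
            std::copy(t.fifo.begin(), t.fifo.begin() + kDeviceFrames, tmp);
            t.fifo.erase(t.fifo.begin(), t.fifo.begin() + kDeviceFrames);
        }
        for (size_t i = 0; i < kDeviceFrames; ++i)
            out[i] += tmp[i];             // mix into the fixed device buffer
    }
}
```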

Post

FWIW, "Hybrid Buffering" is currently being used by several DAW applications.
ProTools started doing it at version 11. Samplitude and Logic have been using "hybrid buffering" for a good while.

A small buffer size is used for all input-monitored tracks.
Tracks simply playing back use a much larger buffer size.
The audio engine keeps it all in sync.
This allows great low-latency performance where needed... while also making the most efficient use of the CPU.
Logic was also one of the first DAW apps (back when it was cross-platform) that used dynamically allocated DSP processing. IOW, if no audio was playing thru a particular plugin (it was silent), it wouldn't load the CPU.

With a combination of hybrid buffering... and dynamically allocated DSP processing, you can squeeze a LOT more performance from a given machine. Add in "read-ahead-cache" (Reaper, Samplitude)... and you can push it even further.
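
A rough sketch of the silence-bypass idea (not any specific host's implementation; Plugin, the threshold, and tailSamples are all made up for illustration):

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sketch of dynamically allocated DSP: skip a plugin's
// process call entirely while its input (and tail) are silent, so a
// silent plugin costs (almost) nothing.
struct Plugin {
    virtual void process(float *buf, size_t n) = 0;
    virtual size_t tailSamples() const = 0;  // reverb/delay ring-out
    virtual ~Plugin() = default;
    size_t silentRun = 0;                    // consecutive silent samples seen
};

static bool isSilent(const float *buf, size_t n, float thresh = 1e-6f) {
    for (size_t i = 0; i < n; ++i)
        if (std::fabs(buf[i]) > thresh) return false;
    return true;
}

void maybeProcess(Plugin &p, float *buf, size_t n) {
    if (isSilent(buf, n)) {
        p.silentRun += n;
        // keep processing until the plugin's tail has rung out
        if (p.silentRun > p.tailSamples()) return;  // bypass: near-zero CPU
    } else {
        p.silentRun = 0;
    }
    p.process(buf, n);
}
```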
Jim Roseberry
Purrrfect Audio
www.studiocat.com
jim@studiocat.com

Post

Jim Roseberry wrote:FWIW, "Hybrid Buffering" is currently being used by several DAW applications.
ProTools started doing it at version 11. Samplitude and Logic have been using "hybrid buffering" for a good while.

A small buffer size is used for all input-monitored tracks.
Tracks simply playing back use a much larger buffer size.
The audio engine keeps it all in sync.
This allows great low-latency performance where needed... while also making the most efficient use of the CPU.
Logic was also one of the first DAW apps (back when it was cross-platform) that used dynamically allocated DSP processing. IOW, if no audio was playing thru a particular plugin (it was silent), it wouldn't load the CPU.

With a combination of hybrid buffering... and dynamically allocated DSP processing, you can squeeze a LOT more performance from a given machine. Add in "read-ahead-cache" (Reaper, Samplitude)... and you can push it even further.
Jim, are we in mental sync? We even mentioned the exact same DAWs within minutes of one another... :-o !!

Post

Internal buffers have to do with internal stuff in Logic, NOT the audio device's audio buffer. The internal buffer may have something to do with how Logic goes about calling plugin callbacks, and how much data at a time, but not the final audio card buffer, which affects latency.
MacPro 5,1 12core x 3.46ghz-96gb MacOS 12.2 (opencore), X32+AES16e-50

Post

Dewdman42 wrote:Internal buffers have to do with internal stuff in Logic, NOT the audio device's audio buffer. The internal buffer may have something to do with how Logic goes about calling plugin callbacks, and how much data at a time, but not the final audio card buffer, which affects latency.
No one said otherwise.

We just claimed Logic has its own playback buffering, which it does. Honestly, you're becoming a real whiner in this topic now and seem to be picking everything apart (like my "guess" as to what the DP improvements might be, after which you went on to say basically the exact same thing).

Whatever, I'm out. Enjoy!

Post

The small, medium, and large "process buffer range" setting in Logic has to do with the internal mixing engine and has nothing at all to do with the audio device, Core Audio, or the latency associated with it. It also has nothing to do with setting record-armed tracks to a lower buffer setting. There is no such thing as a hybrid audio buffer. That was all misinformation.

The Logic Pro X manual states the following on page 1981 of the iBooks version:
Process Buffer Range pop-up menu: Determines the size of the buffer used to compute mixes and effects. You can choose between Small, Medium, and Large buffer sizes.
This is perfectly in line with what I said earlier, which you labeled 100% wrong:
The way that a DAW goes about filling up that buffer is NOT realtime, per se. This DAW or that DAW could go about doing it any number of ways, some more efficient than others.
The "process buffer range" setting simply sizes the internal buffer Logic uses for mixing and plugin processing, and it is yet another global setting, not exclusive to non-record-armed tracks. They do call it a "range", and it's perfectly possible that Apple (as well as other developers) might use larger internal buffers to process plugin callbacks for non-record-armed tracks, since those don't need to be responsive to incoming live MIDI.

Every DAW has its own way of handling how it goes about filling up the Core Audio buffer. The Logic way uses some kind of small, medium, or large buffer range for crunching the mixing and processing of plugins (which in its own way is kind of like the Reaper feature aMUSEd talked about, and perhaps that is what MOTU's existing pre-rendering engine is as well). It's look-ahead.

But all of that still has to funnel through one fixed-size Core Audio buffer... which for the most part will determine audio latency.

And latency should be consistent, not jittering all over the place or different for one track vs. another, if you want to hear audio that sounds in sync.
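
Here's how I picture a large process buffer feeding the fixed Core Audio buffer (a sketch only; mixAndProcessPlugins is a made-up stand-in for the internal mixing pass):

```cpp
#include <algorithm>
#include <cstddef>

constexpr size_t kDeviceFrames  = 512;    // fixed Core Audio buffer
constexpr size_t kProcessFrames = 2048;   // "Large" process buffer

// Hypothetical stand-in for the internal mixing/plugin pass.
static void mixAndProcessPlugins(float *out, size_t n) {
    std::fill(out, out + n, 0.0f);
}

static float processed[kProcessFrames];
static size_t readPos = kProcessFrames;   // forces a refill on the first call

void deviceCallback(float *out) {
    if (readPos + kDeviceFrames > kProcessFrames) {
        mixAndProcessPlugins(processed, kProcessFrames); // one big internal pass...
        readPos = 0;
    }
    // ...feeds several fixed-size device callbacks
    std::copy(processed + readPos, processed + readPos + kDeviceFrames, out);
    readPos += kDeviceFrames;
}
```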
MacPro 5,1 12core x 3.46ghz-96gb MacOS 12.2 (opencore), X32+AES16e-50

Post

Dewdman42 wrote:In this case I set the buffer to 512, and I don't think I can accept the theory that Logic is dynamically changing the buffer size for certain tracks. First of all, the audio device driver has to be informed of the buffer size, and the device driver polls for that buffer on a fixed schedule. It's a global value representing how often the driver will ask for the next chunk of mixed-down audio to play.
Ableton Live 8 had a setting to do this, so although it's a difficult task for sure, it's not impossible.
The setting was called Plug-In Buffer Size; you could set it the same as the audio buffer (at 512, say) or as low as 32. It's no longer in version 9 of Live, but suffice it to say that in my opinion they attempted to copy what Logic had been doing for years. They didn't succeed.

Post

Now Logic is probably doing the opposite, i.e. Logic set to 128 and all non-armed tracks set to 512, but the concept is the same.

Here's the PDF, and yes, it directly goes against what you're saying: the main buffer set to 512 could have the plug-in buffer set lower, as low as 32. For fun I just checked in Live 8, and yep, it's there. It's OK to be wrong every once in a while.
(I'm not saying it worked or was a great idea, but let's be fair here: you're saying it's not possible.)

https://books.google.com/books?id=HWALA ... -9&f=false

Post

Either you are still conflating two different buffer concepts, or you are still not understanding what I have said, or we have a failure to communicate again...

I will try to state it again for clarity; if you are already on board with this, then hooray.

The setting you referred to in Live, similar to the Process Buffer Range setting in Logic, has nada, zero, zilch, nothing to do with the audio card buffer. It is a totally separate buffer related to how the software goes about feeding data to the various plugins that process the audio stream internally.

The audio card buffer, which is how the DAW passes the audio to the card, is fixed in size. That is the buffer whose size you are setting when you set the classic audio buffer size, and it determines audio latency.

These internal processing buffers have more to do with how often, and with how big a chunk of data, the DAW makes function calls into the plugins to process the DSP...
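
For example (purely illustrative; processBlock is a made-up stand-in for a plugin's DSP entry point, and this assumes the chunk size divides the device buffer evenly):

```cpp
#include <cstddef>

// Hypothetical stand-in for a plugin's process function.
static void processBlock(float *buf, size_t frames) {
    for (size_t i = 0; i < frames; ++i) buf[i] *= 1.0f;
}

// The device buffer size is fixed, but the host is free to call each
// plugin in smaller chunks internally (like Live 8's 32-sample setting).
void fillDeviceBuffer(float *deviceBuf,
                      size_t deviceFrames /* e.g. 512, fixed */,
                      size_t pluginChunk  /* e.g. 32, internal */) {
    // Several small plugin calls add up to one fixed-size device buffer.
    for (size_t off = 0; off < deviceFrames; off += pluginChunk)
        processBlock(deviceBuf + off, pluginChunk);
}
```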
MacPro 5,1 12core x 3.46ghz-96gb MacOS 12.2 (opencore), X32+AES16e-50

Post

We're not talking about the ASIO buffer size. :wink:
The ASIO buffer size is (obviously) fixed.

Logic, Samplitude, and ProTools (since v11) do indeed use a hybrid buffering technique.

ASIO (input/output) buffer size is fixed for the I/O of audio, but what goes on "in-between" doesn't have to conform to the ASIO buffer size.

ie: Some plugins stick to the ASIO buffer size (non-latent).
Other (latent) plugins use larger buffers (ie: often convolution/mastering types), which introduce additional latency above that of the ASIO buffer size.

In the case of Logic, Samplitude, and ProTools... they're using a larger processing buffer for tracks that are playing back. These are mixed with tracks that are being recorded (software-based input monitoring) at the ASIO buffer size. You can verify this with the developers of each of these applications.
This gives the best of both worlds. Low round-trip latency for software-based input monitoring... and much more efficient CPU use for processing the bulk of the material.
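
A rough sketch of how a host might line those latent plugins back up with everything else, i.e. plugin delay compensation (illustrative only; PluginInfo and the function name are made up):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of plugin delay compensation: a latent plugin
// (e.g. a convolver working in larger internal blocks) reports extra
// samples of latency, and the host delays the other tracks to match,
// so everything still lines up at the fixed I/O buffer boundary.
struct PluginInfo {
    size_t latencySamples;  // reported by the plugin; 0 if non-latent
};

// The host delays every track by (maxLatency - its own latency).
std::vector<size_t> compensationDelays(const std::vector<PluginInfo> &chains) {
    size_t maxLat = 0;
    for (const auto &c : chains)
        maxLat = std::max(maxLat, c.latencySamples);
    std::vector<size_t> delays;
    for (const auto &c : chains)
        delays.push_back(maxLat - c.latencySamples);
    return delays;
}
```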
Jim Roseberry
Purrrfect Audio
www.studiocat.com
jim@studiocat.com

Post

By "in-between" I presume you mean whatever the DAW is doing to mix the tracks in software, and absolutely, that does not have to conform to the audio card buffer size; I'm quite sure they are all doing different magic tricks to try to optimize how efficiently the CPU is used...

I believe Logic's Process Buffer Range is something that configures, roughly, how small or big the internal plugin processing buffer will be. A smaller value might require multiple calls to each plugin in order to fill the fixed-size audio buffer. A medium size would be 1:1 or close to it, I guess, from what I have gathered, while large would make longer-running calls to the plugins and then potentially be able to feed the audio buffer several times from that... something like that. If this is what machinesworking meant earlier, then yes! There is nothing in the Logic docs to indicate that Logic is using a different process buffer size for non-record-enabled tracks, but it seems like it ought to be doing that.

But again... I feel this is pretty much what pre-rendering does, and to a certain extent the Reaper feature aMUSEd mentioned, though Reaper does it 200 ms ahead. They are all similar, and you have mentioned some more that are doing similar things too. It is not at all surprising to me that they are all doing some kind of look-ahead processing as much as they can, to minimize the plugin thrashing.
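
Something like this, I imagine, for the Reaper-style read-ahead (a simplified single-producer sketch with made-up names, not anyone's actual engine):

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

constexpr size_t kSampleRate  = 48000;
constexpr size_t kAheadFrames = kSampleRate / 5;   // ~200 ms look-ahead

// Hypothetical stand-in for running the plugin chains ahead of time.
static void renderAhead(float *dst, size_t n) {
    for (size_t i = 0; i < n; ++i) dst[i] = 0.0f;
}

struct RingBuffer {
    std::vector<float> data = std::vector<float>(kAheadFrames * 2);
    std::atomic<size_t> write{0}, read{0};
    size_t filled() const { return write.load() - read.load(); }
};

// Worker thread: keeps the ring buffer filled ~200 ms ahead of playback.
void readAheadWorker(RingBuffer &rb, std::atomic<bool> &running) {
    std::vector<float> chunk(1024);
    while (running.load()) {
        if (rb.filled() < kAheadFrames) {
            renderAhead(chunk.data(), chunk.size());
            for (float s : chunk)
                rb.data[rb.write++ % rb.data.size()] = s;
        } else {
            std::this_thread::yield();             // far enough ahead; idle
        }
    }
}

// The real-time callback just drains pre-rendered audio: near-zero DSP cost.
void deviceCallback(RingBuffer &rb, float *out, size_t frames) {
    for (size_t i = 0; i < frames; ++i)
        out[i] = rb.data[rb.read++ % rb.data.size()];
}
```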
MacPro 5,1 12core x 3.46ghz-96gb MacOS 12.2 (opencore), X32+AES16e-50

Post

Dewdman42 wrote:Either you are still conflating two different buffer concepts, or you are still not understanding what I have said, or we have a failure to communicate again...

I will try to state it again for clarity; if you are already on board with this, then hooray.

The setting you referred to in Live, similar to the Process Buffer Range setting in Logic, has nada, zero, zilch, nothing to do with the audio card buffer. It is a totally separate buffer related to how the software goes about feeding data to the various plugins that process the audio stream internally.

The audio card buffer, which is how the DAW passes the audio to the card, is fixed in size. That is the buffer whose size you are setting when you set the classic audio buffer size, and it determines audio latency.

These internal processing buffers have more to do with how often, and with how big a chunk of data, the DAW makes function calls into the plugins to process the DSP...
Very flatly, in OSX anyway (you know, the OS where Logic exists?), the DAW determines the buffer size. So if Live says 512 in the buffer settings, then says 32 in the Plug-In Buffer Size setting, what exactly makes you right about saying that buffer settings can't be different?

You 100% missed the mark when we first started talking about Logic allocating larger buffers to tracks that weren't active, and thought we were saying Logic was making the audio card do strange things. But for some godawful reason, instead of coming to an agreement that indeed any DAW can add latency (which they themselves describe as a buffer) to a track, you're on some pedantic kick. :lol:
