Studio One 4.1.4 - V - Reason 10.3 - |Performance Compared|

Audio Plugin Hosts and other audio software applications discussion

Post

I appreciate the effort, but:

1) Such a comparison - statistically speaking - should not be based on a single VST. It should include several types of devices (instrument, audio effect, MIDI effect) from different vendors, ideally with automation.

2) Comparing Reason with S1 (or Cubase or Reaper) is not like-for-like, because the latter DAWs use pre-buffering techniques to optimise playback (reduce CPU load) for unarmed tracks. I'm not saying it's unfair or anything, but it's just a different type of DAW. Comparing Reason to Live, Bitwig or perhaps FL would be much more appropriate, because they too rely heavily on modulation, complex routing, elaborate device chains, randomness, the ability to change & swap stuff on the fly, etc. Those DAWs were designed differently from the ground up and for different workflows, so they shouldn't be compared without a clear caveat. It's like saying a Jeep is a much better car than a Ferrari because it fares much better on a forest trail. Well, duh?!
Music tech enthusiast
DAW, VST & hardware hoarder
My "music": https://soundcloud.com/antic604

Post

THE INTRANCER wrote: Tue Apr 23, 2019 11:59 pm
spacepluk wrote: Tue Apr 23, 2019 7:40 pm I actually get worse performance with the Maximum setting. And I get dropouts with 130 instances while the Minimum setting runs fine... :shrug:

In Cubase, the higher the setting, the larger the "background" buffer, as far as I know.
That's pretty weird - what audio interface are you using?
In all the tests I'm using the integrated Realtek with ASIO4ALL. I could try with a Focusrite Clarett 2Pre when I have it next to me, but I think the Realtek is as good a baseline as any other.

Post

ShawnG wrote: Wed Apr 24, 2019 12:29 am It is. When the purpose is pure curiosity, and not simply to dump all over your former DAW...

Interesting that you are getting roughly similar instances in S1 as the OP claims, but way more from Reason than he did, at more demanding settings than he claims to be running.

The Cubase results are also pretty surprising - I wouldn't have thought it.
I guess with this kind of narrow test, differences in the CPUs (cache sizes and stuff) show up more clearly.

Yeah, me too. I was also expecting Cubase to be slightly behind S1 because of what I've read in the forums. The bottom line is: it doesn't really matter that much :D

Post

Tronam wrote: Wed Apr 24, 2019 4:33 am Just a quick note that Cubase is only using the 512 sample buffer for live or real-time monitored tracks. For everything else during playback, ASIO-Guard is probably running a 1024 or 2048 buffer (especially if set to High), so this puts the others at a disadvantage. Logic does this too, which is a great feature, but it kind of promotes the idea that they're somehow magically more efficient than everything else by a huge margin.
Yeah, the only DAW at a "disadvantage" is Reason, and it still does pretty well. S1 also has some kind of hybrid buffering going on.
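For a rough sense of scale (my own back-of-the-envelope numbers, assuming a 44.1 kHz sample rate): buffer latency is just buffer size divided by sample rate, so 512 samples is about 11.6 ms, 1024 about 23.2 ms and 2048 about 46.4 ms per buffer - which is part of why a big hidden playback buffer can make a DAW look far more efficient than its real-time path really is.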

Post

antic604 wrote: Wed Apr 24, 2019 5:57 am I appreciate the effort, but:

1) Such a comparison - statistically speaking - should not be based on a single VST. It should include several types of devices (instrument, audio effect, MIDI effect) from different vendors, ideally with automation.
I don't know, maybe run a few different separate tests and then calculate some score like they do in GS with the "Low Latency Performance Database".
antic604 wrote: Wed Apr 24, 2019 5:57 am 2) Comparing Reason with S1 (or Cubase or Reaper) is not like-for-like, because the latter DAWs use pre-buffering techniques to optimise playback (reduce CPU load) for unarmed tracks. I'm not saying it's unfair or anything, but it's just a different type of DAW. Comparing Reason to Live, Bitwig or perhaps FL would be much more appropriate, because they too rely heavily on modulation, complex routing, elaborate device chains, randomness, the ability to change & swap stuff on the fly, etc. Those DAWs were designed differently from the ground up and for different workflows, so they shouldn't be compared without a clear caveat. It's like saying a Jeep is a much better car than a Ferrari because it fares much better on a forest trail. Well, duh?!
Yeah, in my opinion Reason's routing flexibility is well worth the performance compromise. But I don't think these numbers are completely useless either, at least if you understand what you're looking at. In the end it's more data to help you make a decision.

Post

spacepluk wrote: Tue Apr 23, 2019 7:40 pm I actually get worse performance with the Maximum setting. And I get dropouts with 130 instances while the Minimum setting runs fine... :shrug:

In Cubase, the higher the setting, the larger the "background" buffer, as far as I know.
The higher settings increase CPU load for me as indicated by the CPU meter in S1, but on the other hand, higher settings protect against dropouts at higher CPU loads - so it kind of evens out, and I might as well just disable dropout protection. I use 256 samples for my buffer size though, which is a good trade-off between performance and latency.

Another issue with dropout protection is that it messes with MIDI-synced effects like Kickstart - at the Maximum setting, Kickstart is practically ducking on the offbeat rather than the downbeat. This doesn't happen when I just increase the ASIO buffer size.
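For context (assuming 44.1 kHz): 256 samples works out to roughly 5.8 ms per buffer, so the round trip through the input and output buffers is on the order of 12 ms before any driver or converter overhead is added.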

Post

spacepluk wrote: Wed Apr 24, 2019 8:34 am
antic604 wrote: Wed Apr 24, 2019 5:57 am I appreciate the effort, but:

1) Such a comparison - statistically speaking - should not be based on a single VST. It should include several types of devices (instrument, audio effect, MIDI effect) from different vendors, ideally with automation.
I don't know, maybe run a few different separate tests and then calculate some score like they do in GS with the "Low Latency Performance Database".
I know, but I'm not the OP. I did such a test some time ago with Bitwig, Live, Reason and S1 using 3 or 4 plugins, but I never finished. It's not only about averaging several runs, but mostly about how different VSTs are coded, so they could work differently in different hosts. Just a quote from a post on ReasonTalk (a rough sketch of the scoring idea follows after it):
During my testing - which is still ongoing, I don't really have time - I've come to two conclusions:

1) It really depends on the VST - for example, with The Legend, Monark and RePro-5 my results for Reason 10 vs. Bitwig 2.3 were close, i.e. up to 10% less for the former. For Thorn VST the difference was already bigger, i.e. Reason could run 70% of the instances Bitwig would, whereas for Analog Ultra VA-2 only 30%.

2) However, Reason handles high load more "gracefully". What I mean is that with Bitwig (and indeed Live, which I started testing as well) you can add instances and suddenly it starts glitching, but when you remove one instance it still glitches, sometimes even more, so you end up removing 3-5 instances to get to stable playback. I never really know what the precise number is. Also, their GUIs fall to their knees - they get totally unresponsive, like 1 frame per second. With Reason, it's much more stable and predictable - if glitches start, it's enough to just take away the last instance that was added and it's back to normal. Also, the GUI - while slowing down - is still functional.
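To make the GS-style scoring idea above concrete, here's a minimal sketch of how per-plugin "max instances before glitching" counts could be rolled into a single relative score per DAW: normalise each plugin to the best result for that plugin, then combine the ratios with a geometric mean so no single plugin dominates. The plugin names come from the quote above, but the counts are made-up placeholders, not measured results.

from statistics import geometric_mean  # Python 3.8+

# Max stable instances per (DAW, plugin). Placeholder values only -
# swap in your own measured counts.
results = {
    "Reason 10": {"The Legend": 90, "Thorn": 70, "Analog Ultra VA-2": 30},
    "Bitwig 2.3": {"The Legend": 100, "Thorn": 100, "Analog Ultra VA-2": 100},
}

plugins = sorted({p for counts in results.values() for p in counts})

# Best count per plugin across all DAWs, used for normalisation.
best = {p: max(results[daw][p] for daw in results) for p in plugins}

# One 0..1 score per DAW: geometric mean of its per-plugin ratios.
scores = {
    daw: geometric_mean([counts[p] / best[p] for p in plugins])
    for daw, counts in results.items()
}

for daw, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{daw}: {score:.2f}")

A geometric mean penalises a DAW that collapses on one particular plugin without letting a single outlier decide the whole ranking, which fits the "it really depends on the VST" observation.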

spacepluk wrote: Wed Apr 24, 2019 8:34 am
antic604 wrote: Wed Apr 24, 2019 5:57 am 2) Comparing Reason with S1 (or Cubase or Reaper) is not like-for-like, because the latter DAWs use pre-buffering techniques to optimise playback (reduce CPU load) for unarmed tracks. I'm not saying it's unfair or anything, but it's just a different type of DAW. Comparing Reason to Live, Bitwig or perhaps FL would be much more appropriate, because they too rely heavily on modulation, complex routing, elaborate device chains, randomness, the ability to change & swap stuff on the fly, etc. Those DAWs were designed differently from the ground up and for different workflows, so they shouldn't be compared without a clear caveat. It's like saying a Jeep is a much better car than a Ferrari because it fares much better on a forest trail. Well, duh?!
Yeah, in my opinion Reason's routing flexibility is well worth the performance compromise. But I don't think these numbers are completely useless either, at least if you understand what you're looking at. In the end it's more data to help you make a decision.
I'm not saying they're useless! It's just that the OP is a known disgruntled Reason user and he's looking for every opportunity to criticise Reason, so he didn't mention those differences in his post, nor did he reply to my comments about it here and in other threads.

You know, in the end it doesn't matter which DAW can run how many plugins - what matters is whether it does what I need, whether I'm having fun and whether I'm inspired while using it.
Music tech enthusiast
DAW, VST & hardware hoarder
My "music": https://soundcloud.com/antic604

Post

haha! got it :grin:

Post

So the conclusion is "go make music?" :hihi:

Post

reggie1979 wrote: Wed Apr 24, 2019 5:56 pm So the conclusion is "go make music?" :hihi:
Isn't it always, though? ;) :D
Music tech enthusiast
DAW, VST & hardware hoarder
My "music": https://soundcloud.com/antic604

Post

antic604 wrote: Wed Apr 24, 2019 5:57 am

1) Such a comparison - statistically speaking - should not be based on a single VST. It should include several types of devices (instrument, audio effect, MIDI effect) from different vendors, ideally with automation.

2) Comparing Reason with S1 (or Cubase or Reaper) is not like-for-like, because the latter DAWs use pre-buffering techniques to optimise playback (reduce CPU load) for unarmed tracks. I'm not saying it's unfair or anything, but it's just a different type of DAW. Comparing Reason to Live, Bitwig or perhaps FL would be much more appropriate, because they too rely heavily on modulation, complex routing, elaborate device chains, randomness, the ability to change & swap stuff on the fly, etc. Those DAWs were designed differently from the ground up and for different workflows, so they shouldn't be compared without a clear caveat. It's like saying a Jeep is a much better car than a Ferrari because it fares much better on a forest trail. Well, duh?!
When you're testing and measuring things, you want the measuring tape or thermometer to remain constant or fixed. When making one-to-one comparisons, you don't want these measuring devices to be affected by other factors - such as introducing more variables - because what you end up with is an inconclusive, unclear result. If you're going to compare two entities, you have to establish a common baseline that both start from. Changing one key deciding factor is then all it takes to determine the difference between two or more entities. The measuring device in this case is Hive...

Reason & Studio One both have VST2-capable functionality and they are both linear-based DAWs. Studio One 4 is capable of advanced routing, automation, audio recording, effect and instrument device chains, swapping stuff on the fly and more, but that's not really what's being tested here...
---

And as far as being a disgruntled Reason user goes, lol, that really isn't the case - at least not for what will be five years this year... I put it under the microscope way back in 2014, with a redesign and documentation of it all, so... it was about time to have another look and compare the latest version with what I use now... in addition to the performance aspect.

My tracks, even on what is 10-year-old CPU technology, are typically between 120 and 200 tracks per project, full of effects and instruments. Doing this comparison video sparked off two new tracks in one weekend, so not only was this a practical test, it was a fun one too.

I need to fix a couple of spelling errors and re-upload, which is a bit of a pain with my slow broadband (1.7 meg when it should be 3.4 meg) - over 2 hours to upload now.
KVR S1-Thread | The Intrancersonic-Design Source > Program Resource | Studio One Resource | Music Gallery | 2D / 3D Sci-fi Art | GUI Projects | Animations | Photography | Film Docs | 80's Cartoons | Games | Music Hardware |

Post

antic604 wrote: Wed Apr 24, 2019 7:36 pm
reggie1979 wrote: Wed Apr 24, 2019 5:56 pm So the conclusion is "go make music?" :hihi:
Isn't it always, though? ;) :D
Of course, but it's so much easier to just chat online :hihi:

Post

reggie1979 wrote: Wed Apr 24, 2019 5:56 pm So the conclusion is "go make music?" :hihi:
we would never do such a thing :D

Post

THE INTRANCER wrote: Wed Apr 24, 2019 7:42 pm When you're testing and measuring things, you want the measuring tape or thermometer to remain constant or fixed. When making one-to-one comparisons, you don't want these measuring devices to be affected by other factors - such as introducing more variables - because what you end up with is an inconclusive, unclear result.
While I completely agree with this, you'll have to admit this test is very narrow and measures only one aspect of the DAWs' performance. In a complex project, the results might or might not match the results of the test.

And then there's the fact that raw performance is only one aspect of why one would use a specific tool. Just ask anyone using Macs these days, or vintage tube amps and synthesizers.

Post

THE INTRANCER wrote: Wed Apr 24, 2019 7:42 pm
antic604 wrote: Wed Apr 24, 2019 5:57 am

1) Such a comparison - statistically speaking - should not be based on a single VST. It should include several types of devices (instrument, audio effect, MIDI effect) from different vendors, ideally with automation.

2) Comparing Reason with S1 (or Cubase or Reaper) is not like-for-like, because the latter DAWs use pre-buffering techniques to optimise playback (reduce CPU load) for unarmed tracks. I'm not saying it's unfair or anything, but it's just a different type of DAW. Comparing Reason to Live, Bitwig or perhaps FL would be much more appropriate, because they too rely heavily on modulation, complex routing, elaborate device chains, randomness, the ability to change & swap stuff on the fly, etc. Those DAWs were designed differently from the ground up and for different workflows, so they shouldn't be compared without a clear caveat. It's like saying a Jeep is a much better car than a Ferrari because it fares much better on a forest trail. Well, duh?!
When you're testing and measuring things, you want the measuring tape or thermometer to remain constant or fixed. When making one-to-one comparisons, you don't want these measuring devices to be affected by other factors - such as introducing more variables - because what you end up with is an inconclusive, unclear result. If you're going to compare two entities, you have to establish a common baseline that both start from. Changing one key deciding factor is then all it takes to determine the difference between two or more entities. The measuring device in this case is Hive...
The problem is, the very act of testing these two DAWs against each other is inherently flawed anyway. As antic604 said:
It's like saying a Jeep is a much better car than a Ferrari because it fares much better on a forest trail. Well, duh?!
Using the same "measuring tape" on the above is bound to reveal severe differences between the two. But I know that Studio One being the best is very important to you, so please - have at it. :party:
Win 10 | Ableton Live 11 Suite | Reason 12 | i7 3770 @ 3.5 Ghz | 16 GB RAM | RME Babyface Pro| Akai MPC Live II & Akai Force | Roland System 8 | Roland TR-8 with 7x7 Expansion | Roland TB-3 | Roland MX-1 | Dreadbox Typhon | Korg Minilogue XD
