Cubase or StudioOne?

Audio Plugin Hosts and other audio software applications discussion

Post

I guess I've got more. Not only does S1 discriminate against "hardware instruments," it discriminates against keyboard players in general. There's nothing wrong whatsoever with creating beats on tiny keyboards, but there are millions of keyboard players using 76- or 88-key workstations -- meaning they're playing big keyboard controllers which often have lots of onboard sounds. S1's message to keyboard players all over the earth: too bad. You won't be able to adjust what you just heard. We don't care about your workflow. Dumb down, 'cause that's what most people do.

Post

Studio One v3 has a lot of compelling features. Quite frankly Track Transform is the one killer feature that keeps me interested in Studio One. Working with virtual instruments and plugins I always come up against CPU issues, and Track Transform is the most elegant solution for this I have seen.

However, when it comes to getting work done, it's Cubase. Everything you need to get work done is there, and I love the MIDI and Chord Track implementations.
Bitwig Certified Trainer

Post

Studio One all the way! Too many things to list in its favor. Was a dedicated Cubase user since the very beginning (Cubase V5). Switched to Studio One a couple years ago, no regrets. :tu:

Post

BostonRav wrote:Studio One all the way! Too many things to list in its favor. Was a dedicated Cubase user since the very beginning (Cubase V5). Switched to Studio One a couple years ago, no regrets. :tu:
There's nothing to regret if you rarely use MIDI -- or should I say, if you only use VSTis :)

Post

troggg wrote:I guess I've got more. Not only does S1 discriminate against "hardware instruments," it discriminates against keyboard players in general. There's nothing wrong whatsoever with creating beats on tiny keyboards, but there are millions of keyboard players using 76- or 88-key workstations -- meaning they're playing big keyboard controllers which often have lots of onboard sounds. S1's message to keyboard players all over the earth: too bad. You won't be able to adjust what you just heard. We don't care about your workflow. Dumb down, 'cause that's what most people do.
Ummmm. Not sure what you're observing. I use hardware synths and 88-key keyboards all the time with Studio One Pro. Have been with S1 since 2010. Is there something in particular you have noticed?

D.
Last edited by scunacc on Wed Jan 25, 2017 1:04 am, edited 1 time in total.

Post

daw.one wrote:
However, after running some tests I noticed that when playing virtual instruments in Studio One, the real-time latency is almost twice as high as in Cubase. For instance, at an ASIO buffer size of 128 and a sample rate of 44.1 kHz, the latency is 6 ms in Cubase while in Studio One it's 10 ms. Even at 32 | 44.1, Studio One hardly goes below 5 ms.

Apparently I'm the only one in the world suffering from higher latency in Studio One, because I never hear/see anybody mentioning it. For me it's reason enough to stick with Cubase for composing, though.

That said, I'm a huge fan of Studio One's Control Link system, straightforward console, plugin management, and Project page. If you just want to record your band, (input-)quantize everything in your track, or only program your music, I couldn't tell you which one to pick! But when you compose with a lot of virtual instruments and low latency is important, I recommend Cubase (or at least compare the two on your system).
OK. So, Hmmm. Not sure what you're seeing.

Here's what I have at 128 and 32 for a 16.0.2 interface:
128A.png
32A.png
and with a different interface on the same system:
128B.png


(Forum upload won't let me add another attachment for 32, but it's 3.24 input and output for that interface.)
You do not have the required permissions to view the files attached to this post.

Post

Here we go:
32B.png

Post

scunacc wrote:OK. So, Hmmm. Not sure what you're seeing.

Here's what I have at 128 and 32 for a 16.0.2 interface:
Those figures are reported by the driver to the host and are the same across all hosts using the same driver. However, they don't tell you anything about how long it takes a VST Instrument, in real time (not playback), to produce its sound after a MIDI trigger is received in each host.

I always had trouble playing VST Instruments in Studio One. Every MIDI performance I recorded ended up ahead of the beat, simply because I was compensating for the extra latency in real time. Before Studio One I had always used Cubase, so at first I blamed my own sloppy playing, thinking it was being exposed by Studio One's super-tight MIDI recording.

After a lot of struggle and note correction I decided to measure, across several DAWs, the time it takes a VST Instrument (Battery 4 & GA 4) to produce a sound after the MIDI trigger is received in real time (not playback, because that is compensated!).

So in real life this boils down to the time it takes to hear a sound from a VST instrument when the MIDI message enters the host after I hit a key/pad.

I measured two types of latency in my test. I did this because the different hosts treat MIDI differently, which would cloud the results when held against a fixed audio reference (some compensate MIDI, some are tighter, some jitter, etc. -- a different story...).

The first measurement is the time between a fixed audio reference click and the sound of the virtual instrument. At each audio reference click, a MIDI event is also sent.
This basically simulates a person playing along with a fixed audio track.

The second one is a calculation of the time between the received MIDI message and the actual real time generated sound. This is the latency I'm talking about.

Here's an image of the test I did a while ago, showing the result of 8 triggers (each 22050 samples apart) being sent to Battery 4 in several hosts, at an ASIO buffer size of 128 and a sample rate of 44.1 kHz.

[image: measured trigger-to-sound timing for 8 triggers in several hosts at 128/44.1]

The average latency per host (on my system at 128/44.1) between the MIDI trigger being received (MIDI In) and the actual sound output is:
Bitwig = 4.7 ms (average)
Cubase = 5.5 ms (average)
Digital Performer = 6.5 ms (average)
Reaper = 7.6 ms (average)
Studio One = 9.7 ms (average)

For me it's important to keep latency as low as possible when playing VST instruments. Starting with 9.7 ms at an ASIO buffer size of 128 and a sample rate of 44.1 kHz is simply too much for me.
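For context, here's a back-of-the-envelope sketch (plain Python, nothing host-specific) of how buffer size translates to milliseconds. One 128-sample buffer at 44.1 kHz is only about 2.9 ms, so a measured 9.7 ms of real-time latency amounts to several buffers' worth of host-side handling:

```python
def buffer_ms(buffer_samples: int, sample_rate: int) -> float:
    """Duration of one ASIO buffer in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

# One 128-sample buffer at 44.1 kHz:
print(round(buffer_ms(128, 44100), 1))        # 2.9 ms

# A measured 9.7 ms real-time latency therefore spans roughly:
print(round(9.7 / buffer_ms(128, 44100), 1))  # 3.3 buffers
```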

There's a feature request, "True low latency when playing virtual instruments," that you can vote on (as a PreSonus customer) to encourage the developers to address this.

Post

Same topics again and again :p

The best thing is to try them yourself -- demo them and choose for yourself, and don't get fooled by others' biased opinions.

Good luck

Post

daw.one wrote: For me it's important to keep latency as low as possible when playing VST instruments. Starting with 9.7 ms at an ASIO buffer size of 128 and a sample rate of 44.1 kHz is simply too much for me.
Thank you for your helpful and extensive explanation of what your concern is. Understood and appreciated.

It's interesting though, as a guitarist, keyboard player, pianist and vocalist, I find that I can usually adapt to latencies even as high as 20 ms and compensate without issue. Above that I find that it is noticeable. Psychoacoustically, the threshold for latency awareness for most people is about 10 ms: 10-15 ms becomes noticeable for some, and 15-20 ms is noticeable for most. Above 20 ms pretty much everyone notices :) Even then, many adapt quite happily in the 10-20 ms range -- and some even higher. There is variation from individual to individual, too.

Again. Appreciate the time you took in your reply however. :tu:

Post

scunacc wrote:It's interesting though, as a guitarist, keyboard player, pianist and vocalist, I find that I can usually adapt to latencies even as high as 20 ms and compensate without issue. Above that I find that it is noticeable.
Yes indeed, adapting to the latency itself isn't a real problem when recording audio. However, it causes a conflict when that adaptation (human compensation) gets carved into MIDI data, because the MIDI data will serve another purpose after recording (playback, without the additional real-time latency).

For example: I play along with an existing sound using a VST Instrument. While recording, I want the timing of the new sounds to line up with the existing sounds as closely as possible, so I compensate for the real-time latency by playing roughly 10 ms ahead. Since the MIDI data is captured almost instantly, each note is recorded roughly 10 ms earlier than the actual real-time generated sound. Now when I play back that part, it does not sound like I played it, because the 10 ms latency is gone -- on playback it's compensated down to a marginal latency of under 1 ms. So the performance is now roughly 10 ms ahead of the reference material.

I realize this entire scenario is pretty common when working with VST Instruments, and -- to a much lesser degree -- it also happens in Cubase (without ASIO Latency Compensation). As a workaround in Studio One, the overcompensation can be nullified using Record Offset, as long as the ASIO latency stays fixed. Still, at the end of the day I think a DAW should (be able to) come close to the lowest latency the audio driver can handle, to achieve the most responsive experience possible within the limits of the technology.
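The Record Offset workaround can be sketched in a few lines. This is purely illustrative -- the note times and the 10 ms figure are just the example values from this thread, not anything Studio One exposes:

```python
def apply_record_offset(note_times_ms, offset_ms):
    """Shift recorded MIDI timestamps later by the measured real-time
    latency, undoing the player's human compensation."""
    return [t + offset_ms for t in note_times_ms]

grid = [0.0, 500.0, 1000.0]          # where the notes should land (120 bpm)
recorded = [t - 10.0 for t in grid]  # player compensates ~10 ms early
corrected = apply_record_offset(recorded, 10.0)
print(corrected)                     # notes land back on the grid
```

This only works because the offset is constant: change the ASIO buffer size and the measured latency (and therefore the required offset) changes with it.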

On my system Studio One clearly isn't going all the way to those limits, responding roughly 4 ms slower than Cubase. By itself, 4 ms is of course marginal. But put it in context -- "the real-time sound of a VST Instrument in Studio One responds 75% later (slower) than the same VST Instrument in Cubase" -- and it's quite a significant amount.
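The "roughly 75%" figure follows directly from the measured averages earlier in the thread:

```python
cubase_ms = 5.5      # measured average at 128/44.1
studio_one_ms = 9.7

absolute_gap = studio_one_ms - cubase_ms          # the "roughly 4 ms"
relative_gap = absolute_gap / cubase_ms * 100.0   # the "roughly 75% later"
print(round(absolute_gap, 1), "ms,", round(relative_gap), "%")
```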

Post

scunacc wrote:
troggg wrote:I guess I've got more. Not only does S1 discriminate against "hardware instruments," it discriminates against keyboard players in general. There's nothing wrong whatsoever with creating beats on tiny keyboards, but there are millions of keyboard players using 76- or 88-key workstations -- meaning they're playing big keyboard controllers which often have lots of onboard sounds. S1's message to keyboard players all over the earth: too bad. You won't be able to adjust what you just heard. We don't care about your workflow. Dumb down, 'cause that's what most people do.
Ummmm. Not sure what you're observing. I use hardware synths and 88-key keyboards all the time with Studio One Pro. Have been with S1 since 2010. Is there something in particular you have noticed?

D.
I'm not sure how much more specific I and other posters can be. As you know perfectly well, if you use your hardware to control VSTis there's no problem: a track appears in the mixer and you can immediately raise or lower the volume or change the panning, among other operations.

Conversely, if you record a part using a sound onboard your hardware instrument, no track for it is created in the mixer ... and you will not be able to change its volume without committing to audio.

Your workflow is now completely interrupted.

So what's OK about that? Especially in light of the fact that Cubase has had this "feature" (quotes 'cause it's really a necessity) since SX was released around 2003.

I can't put it any simpler than that.

Post

Regarding latency, obviously from my previous posts I'm not a diehard devotee of Studio One, but I'm running a $250 interface (Mackie Onyx Blackbird) and have never had any problem lining up audio, VSTis, or hardware instruments to the beat. I usually run 128 buffer, though I can get down to 64.

So if latency is a constant struggle, there's some other hardware issue or incompatibility rearing its head; it's not inherent to the software itself.

P.S. Just because some software has no latency problems on a certain setup while other software does, doesn't mean there's no hardware issue.

Post

Not techie enough to understand why, but yeah, back in the day S1 MIDI in real time was an exercise in frustration. It was never right and missed the first note a ton. Ah, but it's not alone: FL Studio just recently "fixed" this, and I can remember Tracktion missing the first note back in the day. Sonar had major MIDI issues the last time I used it.

Haven't used Cubase in years, so I'm not sure, but deadmau5 says the MIDI is "so tight" in Cubase :hihi:

Post

troggg wrote:Regarding latency, obviously from my previous posts I'm not a diehard devotee of Studio One, but I'm running a $250 interface (Mackie Onyx Blackbird) and have never had any problem lining up audio, VSTis, or hardware instruments to the beat. I usually run 128 buffer, though I can get down to 64.
I only have PCIe RMEs at my disposal, so I'm very curious what your latency is between the real-time MIDI input and the real-time generated sound -- for example with Mai Tai (though preferably a sample player that works across several hosts).
In other words, the time it takes Studio One to generate a sound in real time from the moment the MIDI trigger is received from an external device (or loopback), at an ASIO buffer size of 128 samples and a sample rate of 44100 Hz.

I did the following to test:
In each host I've set up 2 MIDI tracks and 1 audio track. One MIDI track plays a trigger on each beat (120 bpm) for 8 beats. The output of that track goes to the input of an Instrument track containing Battery, which plays a click sound (panned right) on each trigger. The audio track contains the same click sound (panned left) on each beat and is just there as a reference.

Now when I record the 8 beats in the host and simultaneously capture the audio output on an alternate device (or with another piece of software), I can measure the offset between the left channel (reference) and the right channel (real-time generated sound). After deducting the offset of the MIDI triggers as recorded on the VST Instrument track, I know how much time passed in real time between the moment the MIDI trigger was received and the moment the sound was generated.

So let's say the difference between the reference click and the real-time sound is +470 samples and the MIDI trigger was received at +46 samples: the time between the MIDI trigger and the real-time generated sound is 424 samples (470 - 46), which boils down to 9.6 ms at 44.1 kHz.
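That arithmetic, as a small sketch (the 470- and 46-sample offsets are the example values above, both measured relative to the reference click):

```python
def realtime_latency_ms(sound_offset_smp: int,
                        trigger_offset_smp: int,
                        sample_rate: int) -> float:
    """Latency between MIDI trigger receipt and the generated sound,
    both given as sample offsets from the same reference click."""
    return (sound_offset_smp - trigger_offset_smp) / sample_rate * 1000.0

print(round(realtime_latency_ms(470, 46, 44100), 1))  # 9.6 ms
```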

A less accurate but simpler test is loading two different hosts at the same time (if your ASIO device supports multi-client mixing), with the same VST Instrument in each simultaneously playing the same mono sound. Pan one host left and the other right. If the sound becomes stereo, you know there's a deviation. The downside of this test is that you don't know whether the deviation is in the MIDI handling, the audio handling, or both.

Try it, you'll be amazed how differently hosts handle MIDI and VST Instruments.
