How many of you are getting the Osmose?


Post

BONES wrote: Sun Jan 22, 2023 1:39 am I don't think any of the others are doing that. Roli, for example, have had to make their "5 dimensions of touch" work within the existing MIDI standard, it's not something explicitly enabled by MPE, nor do you need MPE to take advantage of it. In fact, I almost never run anything in MPE mode, yet I still get to use the 5D touch features.
I think I'd put it like this, just to be clearer about what sort of MIDI implementation is required at both ends to get the full benefits:

If you are playing monophonically, or don't need every dimension to be independently controlled per voice, then completely ordinary single-channel MIDI is enough. What is missing from this mode is per-voice pitch bend and per-voice CCs.

If you want per-voice pitch bend and per-voice CCs, then you use multiple MIDI channels. The original MIDI spec even has a specific mode that describes this sort of application of multi-channel MIDI, counter-intuitively called 'Mono Mode' (AKA guitar mode). In practice various synths offer the same thing for a different purpose: multi-timbral operation, loading multiple patches and playing each one via a different MIDI channel. With those synths you can achieve the same result as MIDI Mono Mode by loading the very same patch into each timbre slot.
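
To make that concrete, here's a rough Python sketch of the kind of round-robin channel allocation a controller or DAW does in this mode. The class and method names are just for illustration, not any real API:

# Rough sketch: rotate notes across MIDI channels so each voice gets its own
# pitch bend and CCs. Channels are 0-15 internally (1-16 on the wire).

class ChannelRotator:
    def __init__(self, channels=tuple(range(1, 16))):  # e.g. keep channel 0 free for global messages
        self.free = list(channels)
        self.active = {}                                # note number -> channel

    def note_on(self, note):
        if not self.free:
            raise RuntimeError("out of channels - a real device would steal a voice")
        ch = self.free.pop(0)
        self.active[note] = ch
        return ch                                       # send Note On, bends and CCs for this note on ch

    def note_off(self, note):
        ch = self.active.pop(note)
        self.free.append(ch)                            # recycle the channel for later notes
        return ch

Because every sounding note lives on its own channel, a pitch bend or CC sent on that channel moves only that one voice, which is exactly what mono mode (or the multi-timbral same-patch trick) buys you.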

MPE is basically a further refinement of that old classic MIDI Mono Mode, built on top of the same principles. It introduces the concept of one MIDI channel being used as a global channel for messages that are supposed to affect all the notes, and adds a couple of zones to that picture. It encourages a much wider default pitch bend range, along with methods for detecting and setting that range. It provides a standard for which CC to use per voice (CC74 = MPE Y), and it specifies which aftertouch message should be used: channel pressure, not poly aftertouch (although plenty of MPE controllers and MPE synths still allow both despite what the spec says). It also offers some rules about the circumstances in which a synth's voices should continue to be affected by those signals, e.g. what happens to the pitch of a bent note once you lift your finger.
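
As a rough illustration of the MPE-specific bits of that (channel 1 acting as the global/master channel of a lower zone, RPN 0 declaring the wide bend range, per-note CC74 and channel pressure), here is a sketch that just builds raw status bytes. I've left out the MPE Configuration Message that actually declares the zone, and none of this is any particular library's API:

# Sketch of the MPE-specific messages for a lower zone: master = channel 1,
# member channels 2-16. Channel numbers are 0-based inside the status bytes.

def cc(ch, num, val):        return bytes([0xB0 | ch, num, val])
def pitch_bend(ch, value14): return bytes([0xE0 | ch, value14 & 0x7F, value14 >> 7])
def note_on(ch, note, vel):  return bytes([0x90 | ch, note, vel])
def pressure(ch, val):       return bytes([0xD0 | ch, val])     # channel pressure = MPE Z

MASTER = 0          # channel 1: global messages for the whole zone go here

def set_bend_range(ch, semitones):
    # RPN 0,0 = pitch bend sensitivity; MPE recommends 48 semitones on member channels
    return cc(ch, 101, 0) + cc(ch, 100, 0) + cc(ch, 6, semitones) + cc(ch, 38, 0)

member = 1          # channel 2: first member channel, one note at a time lives here
stream = (set_bend_range(member, 48)
          + note_on(member, 60, 100)            # the note itself
          + pitch_bend(member, 8192 + 512)      # per-note pitch (8192 = centre)
          + cc(member, 74, 64)                  # MPE Y (CC74) affects only this note
          + pressure(member, 90))               # MPE Z: channel pressure, not poly AT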

MPE does represent a narrowed subset of what is actually possible using multiple MIDI channels. For example, it only covers a certain number of dimensions of per-note expression, i.e. only CC74 under the label MPE Y, rather than formally describing a controller-and-synth pairing where numerous other CCs are used per note in addition to CC74. There's nothing in the broader MIDI spec that would prevent more being used, but users can't rely on an MPE synth supporting this (although some do in practice).

MIDI 2.0 is much broader: it doesn't involve some of those arbitrary decisions about how many expressive signals a controller can send per note, and it bumps up both the resolution of messages and the number of channels that can exist within a system. It also includes a rather fancy system of two-way communication and discovery that would let different parts of your setup understand each other's capabilities without a lot of tedious manual configuration on the user's part. But obviously, unless/until we live in a world where various MIDI 2.0 devices actually exist, this is just an 'on paper' discussion, not something we can experience ourselves.

Post

SteveElbows wrote: Sun Jan 22, 2023 2:08 pm
MPE does represent a narrowed subset of what is actually possible using multiple MIDI channels. For example, it only covers a certain number of dimensions of per-note expression, i.e. only CC74 under the label MPE Y, rather than formally describing a controller-and-synth pairing where numerous other CCs are used per note in addition to CC74.
Logic is like that. You can have as many per-note CCs as you want. Logic doesn't specifically support MPE, but rather multi-channel MIDI.

Post

Has anybody tested the Osmose with Logic?

My workflow would be pretty simple: record the MIDI from an Osmose performance into Logic, then have Logic play back the MIDI and get the same performance, maybe tweaking the recorded MIDI to correct some bad notes.

Also, are there any rumours of a 61-key Osmose? I don't have a lot of physical space for multiple controllers :)

Post

Nope, but I have a Seaboard R2 incoming. My background is as a guitarist (the floating whammy being the biggest "hack" of all time) and wind player. Then I expanded into the Chapman Stick to cover polyrhythmic polyphony. Since I also play steel guitar, I think I will be fine with the Seaboard for pitch-gliding a few notes within a held chord. I also have an EWI, which covers monophonic melody playing exceptionally expressively (driving SWAM instruments or any synth/sampler). For faster non-MPE MIDI lines I prefer playing a Push#2.
Greetings from Sweden
Per Boysen
http://www.perboysen.com
Dell i7Q 3,4 MHz 32 GB RAM. Acer ZenBook Flip. Ableton Push#1, Fractal Audio AxeFx2. EWI, Cello, Chapman Stick, Guitars, Alto Flute, Tenor Sax.

Post

pdxindy wrote: Sat Jan 21, 2023 10:50 pm My biggest hope is that u-he will support MPE+, so that great sounds that shine with the Osmose don't depend only on the EaganMatrix.

An MPE+ Zebra 3 is an exciting concept! :)

And of course, for me in particular, an MPE+ Bazille! :lol:
Since I last spoke about whether normal MPE is good enough, further testing with various synths hasn't changed my opinion: most of the magic is in the Osmose's general design. MPE+ resolution is nice to have, but for many types of sounds it does not represent the difference between glorious success and failure.

Some of us have also established that neither the Continuum nor the Osmose seems to actually make use of the added MPE+ resolution for pitch: no CC87 messages are generated to turn the pitch resolution into 21-bit values, so pitch remains at 14 bits, just like MPE. So that aspect won't change if a synth ends up supporting MPE rather than MPE+.
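
For anyone curious what that CC87 mechanism looks like, here is how I understand it from Haken's MPE+ description: a CC87 message carries an extra 7 low-order bits for the next expressive message on the same channel. Treat the details as my reading of their document rather than gospel; the handler names are made up for the sketch:

# My reading of the MPE+ scheme: CC87 stashes 7 extra low-order bits that get
# glued onto the next pitch bend / CC74 / pressure message on the same channel.

pending_lsb = {}                           # channel -> last CC87 value

def handle_y(ch, y):     print(f"ch {ch}: Y = {y:.5f}")
def handle_pitch(ch, p): print(f"ch {ch}: pitch = {p} / 2097151")

def on_cc(ch, num, val):
    if num == 87:
        pending_lsb[ch] = val              # low 7 bits for whatever comes next
    elif num == 74:                        # MPE Y
        y14 = (val << 7) | pending_lsb.pop(ch, 0)    # 7 + 7 = 14-bit Y
        handle_y(ch, y14 / 16383.0)

def on_pitch_bend(ch, bend14):             # the ordinary 14-bit bend value
    bend21 = (bend14 << 7) | pending_lsb.pop(ch, 0) # 14 + 7 = the 21 bits mentioned above
    handle_pitch(ch, bend21)

Since the hardware apparently never sends CC87 ahead of a pitch bend, that last branch always sees a zero LSB, i.e. pitch stays effectively 14-bit, which is the point above.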

So in practice, when talking about MPE+ versus MPE, it mostly boils down to the difference between 7-bit and 14-bit values for MPE Y & Z. Even with the nice range of movement in the Osmose's keys, I still think 7-bit values lend themselves to a lot of expressivity in these areas. It will be interesting to see whether the synth makers who explore MPE+ reach similar or different conclusions about whether the gains are worth it; if they reach a different conclusion to me, I won't argue with them. But certain DAWs that have their own special way of dealing with MPE, rather than just passing everything through and making no assumptions about which CC values are allowed in MPE streams of MIDI data, will also likely require adaptation before MPE+ works with those synths.

I suspect that what will make a far larger difference than MPE vs MPE+ is patch design and each synth's capabilities, as well as the technical details of how each synth processes and smooths incoming MIDI messages. It is possible that some forms of processing might make things feel more disconnected between our fingers and the sounds than the experience we would want when playing an Osmose.

Meanwhile I have found a VCV Rack module that does have an MPE+ mode, so I will be able to do some experiments into how much difference it makes. I had to fix a bug in the code to get that mode working, and I believe the author of the plugin has now adopted that fix. I was waiting until they made that version available in the VCV Rack library before going on about this, and it looks like it is there now: moDllz version 2.1.2 or higher.

Post

SteveElbows wrote: Mon Jan 23, 2023 12:48 pm MPE+ resolution is nice to have but for many types of sounds it does not represent the difference between glorious success and failure.
Of course, I don't think anybody assumed that.
My Amiga 500 with 8-bit audio sounded pretty good at the time too... :party:
An old radio receiver is also good enough - you can still discern the melody and whistle along.

And pretty much every VST will interpolate MIDI data to smooth out the stepping, so you won't really hear it, even when fine-tuning a filter - although you will be unable to reach certain positions via 7-bit MIDI.
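
To picture what that interpolation does, here is a generic one-pole slew in Python - not any particular plugin's actual method, just the usual shape of the idea:

# Generic sketch of smoothing incoming 7-bit CC steps so they don't zipper -
# not any particular plugin's actual implementation.

class CCSmoother:
    def __init__(self, coeff=0.001):
        self.coeff = coeff                # smaller = slower glide between steps
        self.current = 0.0
        self.target = 0.0

    def on_cc(self, value7):
        self.target = value7 / 127.0      # each new 7-bit value becomes the target

    def next_sample(self):
        # per audio sample, move a fraction of the way towards the target
        self.current += self.coeff * (self.target - self.current)
        return self.current

The audible steps disappear, but exactly as said above: positions between two adjacent 7-bit values can never become the target in the first place.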

But I still want to see this area evolve, so that we work from a high-resolution basis and can then use what we need from it, instead of the other way around.

Sound design definitely will be the main defining point though.

Cheers,

Tom
"Out beyond the ideas of wrongdoing and rightdoing, there is a field. I’ll meet you there." - Rumi
ScreenDream Instagram Mastodon

Post

Well, some people have already attributed more of the magic to the MPE+ side of things than I think is really the case, which is why I keep going on about it.

Since I now have a platform for testing MPE+ resolution in VCV Rack without needing to write any code of my own, I suppose it would be useful to know how any interpolation and smoothing works in VCV Rack and in particular modules within it. If I stick to modules where the source code is available, I will be able to see for myself what they are doing.

The other thing we need to consider when comparing other synths to the built-in Haken engine is how even the MPE+ output compares to the internal signals. I don't know how the temporal resolution of the MPE+ output compares to what happens internally, although people do tend to mention that when talking about CV output from Continuums and Osmoses via an i2c bus compared to MIDI.

Post

Tim Shoebridge is on it as well:
Disregard his crashing problems - they're due to the power connector not being plugged in correctly (same as we had ;-) ).
Also, his issue with the editor is because he didn't do the second firmware update.
And he only shows half of the key-glide - if he released the first finger after pressing the second, the note would glide all the way up to the second...
With that said, it's still worth watching since he just plays some lovely sounds:

https://www.youtube.com/watch?v=LxINyHpMjyU
https://www.youtube.com/watch?v=cpatv8DrH44
https://www.youtube.com/watch?v=1LOqGzXMix4

Hopefully he'll do a tutorial on the EaganMatrix...
"Out beyond the ideas of wrongdoing and rightdoing, there is a field. I’ll meet you there." - Rumi
ScreenDream Instagram Mastodon

Post

And the SonicState video:
https://www.youtube.com/watch?v=T-uUYQ2aZ9M
In depth indeed.
"Out beyond the ideas of wrongdoing and rightdoing, there is a field. I’ll meet you there." - Rumi
ScreenDream Instagram Mastodon

Post

I've reserved mine! I am 15th on the waiting list at my music store!

Post

I'm waiting for financial reasons, but I hope to be able to get an Osmose within a year.

Post

ThomasHelzle wrote: Mon Jan 23, 2023 1:08 pm And pretty much every VST will interpolate MIDI data to smooth out the stepping, so you won't really hear it, even when fine-tuning a filter - although you will be unable to reach certain positions via 7-bit MIDI.
I can upscale an image and it will interpolate smoothly, but it will still lack the detail of an original hi-res image of the same subject.

Post

I'm trying to decide whether to get it as soon as it's available, or wait until they are more readily available.

Post

pdxindy wrote: Thu Jan 26, 2023 3:38 am
ThomasHelzle wrote: Mon Jan 23, 2023 1:08 pm And pretty much every VST will interpolate MIDI data to smooth out the stepping, so you won't really hear it, even when fine-tuning a filter - although you will be unable to reach certain positions via 7-bit MIDI.
I can upscale an image and it will interpolate smoothly, but it will still lack the detail of an original hi-res image of the same subject.
But most of that level of detail isn't coming directly from our fingers in the first place; it's being created by the synth engine.

We are talking about the difference between 128 values being used to represent MPE Z and another 128 to represent MPE Y, versus 16384 values being used to represent each of those positions. It sounds like quite a big difference, but when I actually sit in front of an Osmose and draw live graphs from the MPE and MPE+ data, it doesn't really feel like the MPE version of the data is lacking the important subtleties of key position. What is missing is detail of movements so small that they represent unintended variations the player is not going to have deliberate mastery of in the first place. A bit of 'random' variation is good, but that's the sort of thing that can be added at the synthesis stage, and indeed it forms part of the whole analogue vs digital synth debate.
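
A toy comparison of the two resolutions, nothing to do with real Osmose data, just to put numbers on the gap being discussed:

# Quantise the same fake pressure gesture to 7-bit and 14-bit values and look
# at the worst-case error. Purely illustrative numbers.

import math

gesture = [0.5 + 0.4 * math.sin(i / 50.0) for i in range(500)]   # pretend Z curve, 0..1

def quantise(x, levels):
    return round(x * (levels - 1)) / (levels - 1)

err7  = max(abs(x - quantise(x, 128))   for x in gesture)
err14 = max(abs(x - quantise(x, 16384)) for x in gesture)

print(f"worst-case 7-bit error : {err7:.5f} of full range")    # roughly 0.4%
print(f"worst-case 14-bit error: {err14:.7f} of full range")   # roughly 0.003%

Whether a roughly 0.4% step in pressure or key depth is something fingers can deliberately exploit is really the question.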

And then we have to consider that of equal or greater importance than these sorts of values is the temporal aspect of music. Timing is very important, and MPE+ cannot overcome any of the limitations of MIDI there; in fact it can make things worse if the extra info that makes the messages higher resolution uses up additional MIDI bandwidth. Haken are well aware of this, which is why they talk about some balanced compromises in their MPE+ document, e.g. prioritising the temporal aspects early in the note by initially sending only lower-resolution messages that take up less bandwidth, and then switching to sending the additional data later in the lifetime of the note-playing gesture. Even then, temporal resolution is compromised by MIDI compared to what can be achieved internally, assuming there is a connection between the keys and the internal synth that doesn't use MIDI, has less latency and a much higher bus speed. And when using the MIDI version of this data, whether MPE or MPE+, we have to keep in mind that temporal quality can be compromised by other factors, including any processing the DAW and the synth plugin or synth hardware apply to those signals, interpolation included. It is therefore easy to imagine some synths that support MPE+ on paper actually being worse in this respect than some normal MPE synths, if they happen to be less efficient with this side of the picture.
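
Some back-of-envelope arithmetic on that bandwidth point, assuming classic 5-pin DIN MIDI at 31250 baud (USB MIDI is normally faster, so read these as worst-case figures):

# Worst-case throughput over a classic DIN MIDI cable.

BAUD = 31250
BITS_PER_BYTE = 10                       # 8 data bits + start and stop bits on the wire
bytes_per_sec = BAUD / BITS_PER_BYTE     # 3125 bytes/s

plain = 3                                # one bend or CC: status + 2 data bytes
plus  = 3 + 3                            # same message preceded by a CC87 low-bits message

print("plain messages per second:", bytes_per_sec / plain)   # ~1042
print("MPE+ style pairs per second:", bytes_per_sec / plus)  # ~521
# Split across 10 busy voices, that second figure is only ~52 updates/s per voice,
# which is presumably why Haken delay the extra data until later in the note gesture.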

These temporal aspects mean that I will need to be careful when attempting to evaluate the differences between MPE and MPE+ using a suitable VCV Rack plugin. I will need to understand how good the VCV Rack system is in general when it comes to timing, how it compares to hardware synths, DAWs, soft-synth plugin-format bottlenecks and the sort of message processing that has become the norm in plugin land, and whether any specific modules I use are hampering the best possible temporal resolution.

Post

It's also worth considering that the Osmose has an example of very high resolution in time and space being a bad thing, not a good thing. The combination of very sensitive sensors and the design of the side-to-side pitch wiggle on the Osmose means that human fingers create very noisy and unstable signals in this dimension. In order to get useful results from it, rather than something that in a large number of scenarios is only good for comedy, they had to create a bending-signal stabilisation feature. This feature smooths things out and throws a lot of information away, with great results. But Osmose users can, as an experiment, turn it right down so that it no longer influences things at all, and the results are a hilarious mess that even the steadiest theremin player would have no prospect of controlling in a musically useful way.

This is not a criticism of the Osmose. It only reflects that with most devices of this type we are not given the ability to glimpse the raw signals for ourselves, e.g. they don't let users turn such stabilisation features down below a certain point. The Osmose is different because it does allow this, and I actually really enjoyed getting some sense of just how sensitive the sensors for this axis are. It's an eye-opener in terms of how much processing is required to get things into a useful range, to achieve the right balance of compromises necessary to tame the beast. People will vary in terms of exactly what number they want to set this stabilisation option to in order to get the right balance for themselves, and I'm glad Expressive E let us go beyond what the lowest usable value could possibly be.
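
For what it's worth, the general shape of such a stabilisation stage is easy to sketch. To be clear, this is not Expressive E's actual algorithm, just an illustration of how much information a dead zone plus heavy filtering throws away:

# Not Expressive E's algorithm - just the general idea: a dead zone plus heavy
# low-pass filtering applied to a noisy side-to-side pitch signal.

class PitchStabiliser:
    def __init__(self, amount=0.9, dead_zone=0.05):
        self.amount = amount              # 0 = raw sensor chaos, close to 1 = heavily tamed
        self.dead_zone = dead_zone        # ignore tiny wobbles around the key centre
        self.smoothed = 0.0

    def process(self, raw):               # raw wiggle in -1..+1 around the centre
        if abs(raw) < self.dead_zone:
            raw = 0.0                     # discard unintentional jitter outright
        # heavier smoothing means more information discarded, and a more playable result
        self.smoothed += (1.0 - self.amount) * (raw - self.smoothed)
        return self.smoothed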
