LinnThesizer - a synth concept


Post

Yes, this really aligns with the far more vague thoughts I had. I've been thinking through the envelope bits after the discussion in the other thread, in particular John's points.

One thing that struck me is that there seem to be really just two primary modes, captured by your hold vs. strike: sounds that have a one-time trigger (piano, guitar) and sounds that are continuous (wind, strings, etc.). In reality, as with everything, as soon as you start digging in you find the exceptions to the rule ... piano sustain control, electric guitar feedback, the fact that some wind instruments seem to have an initial threshold level, etc. But that should cover most cases, excluding special cases like instruments that restrike continuously, as well as sound designs we haven't even thought of, which is one of my interests.

With respect to timbre (using this here as a general term for the overall 'quality' of a sound), my sense is that there is a lot of power in being able to model or create multiple instances of various waveforms (again using this very generally, to mean any fundamental sound component), all controlled by the same input. Sculpture does a really nice job of supporting this. One thing I've found in playing with synths is that the most richness often comes from relatively simple things, like detuning, having slightly different waveforms, etc. So to me the general model of layering multiple sound 'engines' (oscillators, samples, etc.) makes sense as something to preserve, but I also get the point about abstracting these pieces away for the musician without losing their full range of power. (PhasePlant is particularly nice for playing with that kind of thing, especially since you can create macros that can stand in for the knobs in Roger's design.)
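
Just to illustrate the detuning point, here's a tiny generic sketch (nothing to do with any specific engine; the 7-cent spread is an arbitrary example):

```python
# Quick illustration of the "richness from simple layering" point: three
# copies of the same sawtooth, slightly detuned, summed into one voice.
# Completely generic DSP, not tied to any particular synth.

import math

def saw(phase):
    """Naive sawtooth in -1..1 from a phase in cycles (fine for illustration)."""
    return 2.0 * (phase - math.floor(phase + 0.5))

def layered_voice(freq_hz, t, detune_cents=(0.0, -7.0, +7.0)):
    """Sum a few detuned copies of the same waveform at time t (seconds)."""
    out = 0.0
    for cents in detune_cents:
        f = freq_hz * 2.0 ** (cents / 1200.0)   # cents -> frequency ratio
        out += saw(f * t)
    return out / len(detune_cents)              # keep the level roughly constant
```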

With respect to envelopes, what came to me based on the conversation in the other thread is that this might be a way to resolve the velocity/pressure paradox, which is why I was so curious about it: obviously velocity is not a good way to control continuous (hold) sound, but I'm also thinking that pressure is not an ideal way to control episodic (strike) sound.

I'm thinking we humans are wired to treat these as distinct types of motion; for a given movement we can control one or the other but not both. If I'm trying to insert a wooden peg into a hole without breaking it, the sense of gently tapping it with a mallet is different in kind from the sense of gently pushing it in with my hand. So what I'm thinking is that the envelope model (as opposed to its parameterization) is itself different for hold vs. strike, right?

In the hold case, we want to register continuous pressure, with perhaps some "help" from the synth. You might want some smoothing of the signal, and you might want to be able to apply some kind of transforming curve. But more than anything you don't want to violate the feedback expectations of the player: a change in direction is always respected immediately, and the amplitude is predictably and closely matched to the actual input. So the control aids in that case might be simpler, perhaps some kind of damping control, which with a negative value could serve to sharpen the attack, essentially predicting what the player "really" wants to do, similar to how brake assist works in a car. Think of a fly-by-wire system. These could even be seen as training wheels, or simply a sensitivity control, which the LinnStrument already has, so maybe you don't need it at all. Or maybe you do want them, because they give different instruments a different 'feel'. Then the only thing we are left with from ADSR is release, i.e. what happens when we remove all control, and that might be more a matter for effects, i.e. delay / reverb.
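
To make that "damping control" idea a bit more concrete, here's a rough Python sketch of what I mean (purely illustrative; the 0.95 and 4.0 scalings are made-up numbers, not anything from an existing synth):

```python
# Hypothetical sketch of the "hold" control aid described above. One
# 'damping' knob in [-1, +1]:
#   damping > 0 -> smooth the raw pressure (one-pole follower)
#   damping < 0 -> emphasize changes (sharpen the attack), a bit like
#                  predicting where the player is headed
# Direction changes still show up on the very next pressure message,
# so the feedback expectation isn't violated.

class HoldPressureShaper:
    def __init__(self, damping=0.0):
        self.damping = damping   # -1.0 .. +1.0, the instrument's "feel"
        self._smoothed = 0.0
        self._previous = 0.0

    def process(self, pressure):
        """pressure: raw per-note pressure in 0.0..1.0 -> shaped amplitude."""
        if self.damping > 0.0:
            # More damping -> slower follower.
            coeff = 1.0 - 0.95 * self.damping
            self._smoothed += coeff * (pressure - self._smoothed)
            out = self._smoothed
        else:
            # Negative damping: add a scaled difference term so rising
            # pressure overshoots slightly, sharpening the perceived attack.
            boost = -self.damping * 4.0
            out = pressure + boost * (pressure - self._previous)
        self._previous = pressure
        return min(max(out, 0.0), 1.0)
```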

In the strike case, we'd want to register velocity, and we can ignore pressure altogether. There I'm not sure the concept of attack has any utility at all, or perhaps it does as a kind of peak-softening effect, i.e. the sound shape of a snare hit vs. a timpani hit. However, what happens to the sound after the initial strike does need to be modelled. There are things the human would not have control over (resonance/sustain, the initial shape of the response peak) and things a human would: for guitar that could be shaking the instrument, damping strings, hammer-ons and pull-offs (:O!), etc.

For those human-controlled post-strike effects, to turn back to the multiple-timbre point, I think you could and would want to model them as separate components using the "hold" model. The wrinkle is that you'd want to be able to feed the amplitude signal from the "strike" into this other model, such that the initial value would be independent of pressure but then somehow (?!) transition into pressure control.
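
Here's a very rough sketch of what I mean by that handoff, just to show the shape of the idea (the decay and handoff times are arbitrary examples, not a proposal for actual values):

```python
# Hedged sketch of the "strike feeds hold" idea: the struck layer gets a
# velocity-scaled decaying envelope, and control of the post-strike layer
# fades from that envelope over to live pressure within a short window.

import math

class StrikeIntoHold:
    def __init__(self, decay_seconds=1.5, handoff_seconds=0.25, rate=500.0):
        self.decay = decay_seconds      # natural ring-out of the strike
        self.handoff = handoff_seconds  # time to hand control to pressure
        self.rate = rate                # control updates per second
        self._level = 0.0               # current strike envelope level
        self._age = 0.0                 # seconds since the strike

    def strike(self, velocity):
        """velocity in 0.0..1.0 sets the initial level, ignoring pressure."""
        self._level = velocity
        self._age = 0.0

    def tick(self, pressure):
        """Advance one control frame; returns the level driving the hold layer."""
        dt = 1.0 / self.rate
        self._age += dt
        # Exponential decay of the strike component.
        self._level *= math.exp(-dt / self.decay)
        # Crossfade weight: 0 right at the strike, 1 once the handoff is over.
        w = min(self._age / self.handoff, 1.0)
        return (1.0 - w) * self._level + w * pressure
```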

Just some random thoughts; hopefully one or two of them made some sense... but what I'm afraid I've really done is just propose making things more complicated.

What I think I'm suggesting as a bottom line is that we have two separate control areas, one for hold and one for strike, along with a mix control between them -- but key to that is that each would likely want its own timbre controls.

Post

Roger_Linn wrote: Tue May 31, 2022 11:52 pm Hi all,

After reading some of the responses, I realized that in my effort to keep my concept simple, I wasn’t clear enough. So I added a little more detail to my original post about the idea and what led me to it.
Reading through this immediately led me to one complex and one simple example. The complex one is the EaganMatrix, and the simple one is Fullbucket's MPS. It should probably be the EaganMatrix with the UI of the MPS, but with some real knobs… Maybe an add-on you could attach to the upper side of the LinnStrument, with some extra connectors to get it connected to the inside and accessed via MPE+…
The EaganMatrix Eurorack module doesn't have enough knobs for the modifiers, and the form factor doesn't fit…
And it would be nice if it were open source…

Post

Roger_Linn wrote: Fri May 27, 2022 11:10 pm
When I’m wearing my musician hat, I’m not so concerned about the synthesis method. I’m more concerned with achieving the sound that I imagine in the shortest time, so I can quickly return my focus to the music. And while I’m lucky that I understand the technical details of various synthesis methods, what about the vast number of musicians who don’t?

What I’m proposing

I’d like to see a synth that permits the creation of a wide variety of rich, complex and accurate types of sounds that the musician seeks, both existing and imagined, but with a minimum of effort and technical knowledge, thereby allowing the musician to focus on music and not engineering.
To give my own experience - I'm what you might call a 'contemporary classical' composer, used to writing music on staves for acoustic instruments. I use 'synthesis' for working with microtones and alternative tunings. What Roger says here is vital for composers like myself, because I have absolutely no idea about the 'electronics' of synthesisers beyond vague notions of what they're doing. I deal with 'sound' and I want to 'shape' sounds. My degree subject was acoustics, so I understand some of the basic ideas, but most musicians in my area of work don't. You might say they only work with 'sampled' sound sets, but think of what they could do if they could compose for the new expressive interfaces such as the LinnStrument via 'sound shaping' synthesis.

Mark

Post

OK, I read the update and I'm starting to grasp that we're talking about something that could be really big.
I started making electronic music about two decades ago, and since then I've always been looking for ways to get that acoustic expression that often seems to be lacking.

I have found some solutions, mostly in the realm of physical modeling and wavetable synths, but only for a small selection of the virtual instruments I would want. I know some programming but rarely deal with actual audio synthesis, so I'm not too well versed in the technical details; still, I have a basic understanding of how it works and have done a little sound design here and there. If the intention is making music, it can be a very slow and cumbersome process that takes you away from actually playing and composing. Not to say that's in itself a bad thing (it can be fun and rewarding, of course), but it shifts your focus and takes time.

Additionally, I have noticed a trend of 'high quality' sound generation often just meaning 'slather it in compute-expensive reverb', which leads to lots of different instruments each being played in their own virtual environment. You then have to render out individual tracks to make the full composition easier for your CPU to handle, which limits your ability to make changes on the fly.

I would rather have a dry signal and apply my own reverb on sends, to create a specific feel and be more efficient.
Lots of synths come with presets that sound really cool by themselves through sequencing, arpeggiation, or big noise-based swooshes of sound with lots of intricate detail, but they aren't really something you can easily integrate into your own compositions. Going through tons of presets can often be a hindrance as well, because most of them are not what you are looking for right now and you typically only find some approximation of the voice in your head.

I recently bought a Touché and I love it. I also bought Expressive E's Imagine, which lets you combine two physical-modeling algorithms like oscillators to create new sounds. The Lié ecosystem is very interesting, as it comes with a lot of presets for synths released by Expressive E as well as others, to more quickly find a sound that works for you, but it doesn't fully solve all of these issues yet.

In the end I still find myself going "ah, this is nice and did some of the work, but I still have to modify almost all of it to make it work for me", followed by another week of mostly tweaking things instead of playing.
(A global toggle for the reverb would be a simple QoL feature, for example.)

For expressive instruments it would be really nice to have a kind of open standard for mapping the capabilities of your specific instruments to a synth globally, so you don't have to keep doing the same thing over and over again.
For example: some MPE devices like the LinnStrument or the Roli Seaboard allow a seamless, intuitive pitch bend across a wide range of steps and work well with the bend range set to 48 semitones; others just let you wiggle the keys a little, which is great for vibrato but would be uncontrollable with that many steps and would prefer just 1. Similarly, there is the distinction between continuous pressure, which is great for bowed or blown instruments, and plain aftertouch, which only kicks in once you have bottomed out a key and is too delayed for that use case. And then of course everything has different sensitivities, and players have different styles of using them.

it would be incredibly valuable to set up your hardware capabilities once per synth in a modulation matrix and then have that work across all presets in a sensible way.
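
Just to illustrate what such a per-controller profile might look like, here's a small Python sketch (the names and numbers are purely hypothetical, not an existing standard):

```python
# Rough sketch of the "set up your hardware once" idea: a controller
# profile that normalizes incoming MPE data, so presets only ever see
# pitch offsets in semitones and pressure in 0..1, regardless of whether
# the device uses a 48-semitone or 1-semitone bend range, or continuous
# channel pressure vs. bottom-out aftertouch.

from dataclasses import dataclass

@dataclass
class ControllerProfile:
    name: str
    bend_range_semitones: float   # what the device's full pitch-bend span means
    pressure_is_aftertouch: bool  # True if pressure only starts at key bottom
    pressure_curve: float = 1.0   # >1 softens, <1 sensitizes (simple power curve)

    def pitch_offset(self, bend_14bit: int) -> float:
        """14-bit pitch bend (0..16383, center 8192) -> offset in semitones."""
        return (bend_14bit - 8192) / 8192.0 * self.bend_range_semitones

    def pressure(self, raw: int) -> float:
        """7-bit pressure/aftertouch (0..127) -> normalized 0..1 with curve."""
        return (raw / 127.0) ** self.pressure_curve

# Two made-up example profiles; every preset would read the normalized values.
linnstrument = ControllerProfile("LinnStrument", 48.0, False, 0.8)
wiggle_keys  = ControllerProfile("springy keybed", 1.0, True, 1.2)
```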

I may have strayed a bit from the actual topic at hand here, but I feel this is a related issue.

I absolutely love the idea of picking aspects that describe the sound you want, with the combination of your choices actually setting things in motion, rather than scrolling through an endless list of presets, finding three things you sort of like, forgetting about the previous two, and having to go through the list another time to find them again... and then still not being quite happy with the end result.

If you could get a quick approximation of what you are looking for right away and then have the OPTION of going deeper into the weeds whenever you want to, the process of elimination would be so much more efficient, and the likelihood of staying in the flow would greatly improve.

Post

Having read the Update, I think the instrument closest to your concept is the legacy Tassman synth by AAS. It was developed by Stephan Tassart who is now at Blackmagic Design. It can truly morph among any type of source. Harm Visser sold an entire orchestra (79 instruments) of presets for it. Of course, its UI is outdated and far predates MPE, but it shows the proof of concept! Modelonia by NuSofting is also in the ballpark.

Post

As far as envelopes go, at least for amplitude, I've found that using velocity for an initial transient and pressure for the sustain level is a versatile strategy. If the synth allows input curves, then it's possible to have that transient only kick in at higher velocities. Perfect for quick staccato stabs and slow pad-style attacks alike. I'm using Drambo on iPad.
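
For anyone curious, here's a tiny Python sketch of that idea (the threshold and timing are arbitrary example numbers, not Drambo's actual parameters):

```python
# Minimal sketch of the strategy above: velocity -> short transient,
# pressure -> sustain level, with a velocity curve so the transient
# only really appears on harder hits.

import math

def amplitude(velocity, pressure, t, transient_time=0.06, threshold=0.6):
    """velocity, pressure in 0..1; t = seconds since note-on -> amplitude 0..1."""
    # Input curve: nothing below the threshold, then a steep ramp to 1.
    excess = max(velocity - threshold, 0.0) / (1.0 - threshold)
    transient_strength = excess ** 2
    # Short decaying spike from the transient, on top of the pressure level.
    transient = transient_strength * math.exp(-t / transient_time)
    return min(pressure + transient, 1.0)

# A soft, slow pad press: almost no transient, level simply tracks pressure.
# A hard staccato stab: a sharp spike on top, then it drops to the pressure level.
```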

Post

Excellent points all! Thanks for all of the contributions to the idea.

This would be an excellent topic of discussion for my monthly LinnStrument Owners Zoom call next Tuesday morning at 10 am California time:

viewtopic.php?t=577761

Post

With regard to timbre, it seems Modal has done some really innovative things in creating the kind of higher-level constructs that add depth, colour, etc. without requiring deep understanding of the underlying algorithms, and also without sacrificing freedom of exploration -- no mean feat! I'm also thinking the two-engine design would be great for experimenting with strike on one engine and hold on the other. (Would love to get my hands on one of these; a Cobalt 8m would seem to be a great match for the LinnStrument, even just in terms of form factor.)

https://www.youtube.com/watch?v=HmKJ4TOwJ3M

Post

In this recent interview, the Roli sound designer (& KVR member) proposes something like the LinnThesizer: https://www.kvraudio.com/interviews/exp ... aban-54704

"I wish for more developers to understand the importance of 'expression-curves' and design their MPE instruments in such a way that an average user could use these 'expression-curves' easily, without having to dig into a mod-matrix or hunt through some hidden pages.

MPE instruments depend on great audio engines, so I'd love to see more Physical Modeling audio engines being developed, but with MPE in mind from the ground up. Engines that mix Physical Modeling with samples, wavetables, resynthesis can provide amazing results."

This may be the direction synthesizers need to go, with the engine and UI designed to improve the User eXperience by making the parameters more musical than technical. If MPE synths have simpler UIs but richer sounds than non-MPE synths -- whoa, that hardware<>software synergy would be a total game-changer.

Responsive synth technology that improves the experience of playing the controller will probably increase adoption of MPE instruments. One sign of progress will be more quality MPE synth demos by regular musicians, rather than hardware designers, software developers, tech heads, and pro musicians with their programmers.

Post

I looked up a few of the suggestions that were made here, and some of them, like Tassman for example, unfortunately are older and no longer supported. I did remember one environment that I think hasn't been mentioned yet and watched a few videos on it, since I don't own it (yet).

MSoundFactory from MeldaProduction.

It's a modular instrument-maker environment with tons of depth and MPE support, currently available for $200.
https://www.meldaproduction.com/MSoundFactory

I have never seen anything comparable in the amount of depth you can get into when making stuff, and they have a non-editable lite edition if all you wanna do is play presets made by others.
It also features basically any form of synthesis you can think of, including physical modeling.
Here is an example video tutorial that shows, among other things, modulation between 4 different sets of harmonics with an XY grid on a physically modeled instrument.

https://youtu.be/IV-7fF_8lOE

Post

I know I keep coming back to the EaganMatrix, but it strikes me that what's described here is a slightly more knobby/preconfigured version of that?

Edmund (and others?) has done a ton of work to create some fantastic starting points. If that concept could be expanded upon further, to incorporate control over what used to be called barrels (I forget the current term, sorry), which could be targeted from the CC faders within the LinnStrument, you'd have a huge range of highly configurable and expressive sounds.

I went into the EaganMatrix intending to learn it, and I have to some degree, but what I end up doing more is finding interesting ballpark sounds using one of the synthesis methods and refining them with either external effects (more likely) or, when I can't quite get there, tweaking in the editor.

Ultimately my own Linnthesiser is a ContinuuMini controlled by a LinnStrument via a Raspberry Pi. I'd love to do away with the ContinuuMini and the Pi, though!

Post

The speculative nature of the original question suggests to me that the answer lies in coordinating the space between language (the way we describe sounds) and expectation, i.e. what we expect to hear in response to that description. The only methods that get anywhere near tackling this incommensurable cognitive divide are, of course, machine learning and deep learning.

Google has already made considerable headway in this area with its Project Magenta:

https://magenta.tensorflow.org/

https://www.youtube.com/watch?v=iTXU9Z0NYoU

I believe this first came to the public's attention circa 2017. Since that time there has been considerable progress, particularly in the field of visual sciences. Tools such as Midjourney provide striking examples of how language can be interpreted to generate new images. For example:

https://www.reddit.com/r/deepdream/comm ... idjourney/

Reddit's deepdream channel - https://www.reddit.com/r/deepdream/ - provides a broad range of very recent examples of this method. It's fascinating.

Imagine taking the OP's original question and, with this in mind, approaching the site:

https://www.prosepainter.com/

which combines a textual description, a starting image and some user-defined input (colour and position) to produce a set of derived visual outcomes. It's not difficult to imagine how this paradigm could be applied to sound.

It may be that, with Roger Linn's considerable reputation and brilliant engineering imagination, he could bring something remarkable into being…

Related terms include:

GAN Art
Disco Diffusion
Dall-E
VQGAN+CLIP

This posting, for instance, has a large list of other related references:

https://www.reddit.com/user/Wiskkey/com ... t_11_2021/

There are of course many other related examples:

https://blog.google/technology/ai/what- ... nstrument/

https://sites.research.google/tonetransfer

https://www.inputmag.com/culture/google ... instrument

https://analyticsindiamag.com/googles-m ... ment-solo/

Post

Good points, all. Google's Magenta has made some good beginnings and the Eagan Matrix is an excellent synth. I look forward to seeing both hopefully evolve into something similar to my stated goal of a sound generator that is able to quickly achieve the non-technical musician's imagined sound with a simple and musically-intuitive interface, thereby not separating him from his musical process.

Post

Roger_Linn wrote: Wed Jun 15, 2022 3:27 pm... my stated goal of a sound generator that is able to quickly achieve the non-technical musician's imagined sound with a simple and musically-intuitive interface, thereby not separating him from his musical process.
It's an ambitious and worthy goal - interfaces are so complex, and in my experience somewhat fluid; things I completely clicked with years ago I can no longer use now.

I wonder how well I'd personally get on with your goal - I've become so drawn to the accidents/surprises/complexities of modular I'm not sure how I'd cope.

Thinking further, my tIBstrument would be a modular synth where you could patch up a single voice but play it with multiple channels of polyphony. Someone's bound to do it one day...

Post

If “LinnThesizer” did exist, I might not even use it because I know all the details of synths. The intent is to make it easier for people without my technical skills.
