Expressive sound synthesis?

DSP, Plugin and Host development discussion.

Post

I had this weird idea today when I was thinking of the limited expressibility in electronic music.

In real instruments, musicians can be thought of as expressing their ideas more genuinely, in the sense that they control the instruments directly. This is fairly true of electronic instruments as well, but their interfaces are rather conventional (keyboard, knobs, faders), even though electronic instruments could allow for far more ingenious expression styles, as some special keyboards have shown (such as these: https://www.roli.com/), because, well, anything could be thought to be computable.

But putting the controllers aside, are there any sound synthesis or signal processing "bodies" that have speculated about the ability to transform ideas into sound more directly than what electronic music production techniques are now (trial and error)?

What I'm thinking is merging what most musicians do, having songs playing in their head and then trying to translate that into music, but making it somewhat machine-assisted.

One can take an analogy from the visual arts, where it's more direct to "get what you visualized". But that doesn't really work with sound, because sound cannot be controlled like paint.

Post

Also, most synths are rather technical compared to real instruments. One does not play an instrument by thinking of FM amount; rather, one plays by listening to the sound.

Therefore I think it'd be reasonable if instruments had more musical parameters and fewer technical ones.

Post

Modern touch screens could also be used to build better controls, if only synths were adapted to different control schemes.

Post

Interesting. I think technology is slowly but surely getting to that point of expressive programming. Take, for instance: http://www.rogerlinndesign.com/linnstrument.html

"What I'm thinking is merging what most musicians do, have songs playing in their head and then they try to translate that into music, but make it somewhat machine assisted."

I'm somehow reminded of nanotechnology.

Post

Tim Exile wrote an article about this subject a few days ago.

https://medium.com/cuepoint/2016-the-ye ... 3a66b2af73

Post

Perhaps I misunderstand you, but learning to play a physical instrument expressively takes a lot of time and practice. Sure, we see musicians 'just' sitting there and expressing themselves, but they didn't earn that right overnight. It's the same with electronic music, and I think that adding more layers of obfuscation between us and the raw synthesis/DSP/computing in the name of "ease of expression" is actually making electronic music more generic and homogeneous.

What is a musical gesture? It is simply a curve in space, placed with intention and sensitivity. Whether it's the vibrato of a guitarist's finger, a wiggle drawn with a mouse on a screen, or an LFO being modulated by something, it's how that expression is put to use that matters. A good, fancy controller will allow better performances in real time by allowing more simultaneous input vectors, but if you're talking about just making music with a computer, we already have a staggering number of ways of getting good results.

And the world of physical instruments can inform the creation of electronic sounds. For example, the human voice never stays on pitch: it bends up into notes and down out of them, it wavers, and there are timbral shifts in the tone to emphasise meaning. The same approach can be applied to music constructed on the computer. Here, being able to play other instruments (I'm a mediocre keyboard player myself, but I have played with some excellent musicians, which was very enriching) and a good ear for what music and sound do in general will be a really good aid.

Again, I may have misunderstood you and you're speaking strictly about live performance, but constructed electronic music is only lifeless in those cases where people take preset sounds, construct music with a series of note-ons and note-offs, mix it to be as loud as all get-out, and call it a day. There is no shortage of expression in electronic music.

I hope this doesn't come off as aggressive, it's just something I've been thinking about a lot lately.

Post

Sendy wrote: What is a musical gesture? It is simply a curve in space, placed with intention and sensitivity. Whether it's the vibrato of a guitarist's finger, a wiggle drawn with a mouse on a screen, or an LFO being modulated by something, it's how that expression is put to use that matters.
Yes, but I think not many instruments respond to those parameters musically. Musicality is left to the one tweaking the parameters. And there's a lot of trial and error.

I'm thinking e.g. synths could be designed with more limited parameter ranges or control schemes in order to "be aligned" with certain sounds, when it's known that certain sounds lie "in a certain direction" parameter-wise. Real instruments are often more limited (although that may depend on how one judges their expressivity; even a guitar can generate a variety of sounds when used "wrongly").

Camel Audio Alchemy's Perform section is pretty neat. It lets one define "parameter sections" between which the plug-in interpolates as one moves between the squares.

[screenshot of Alchemy's Perform pad]

That's like "thinking in sounds" more than "thinking in parameters". Once it has been set up.

Post

Fluky wrote: are there any sound synthesis or signal processing "bodies" that have speculated about the ability to transform ideas into sound more directly
There are several universities and an international conference on musical instrument interfaces:
http://www.nime.org.
Previous conference papers are at:
http://www.nime.org/archives/
There was a thread on this topic about a year ago in Music Theory.
The next NIME is 11-15 July 2016 in Brisbane, Oz (see you there!)
A week before is the Auditory Display conference in Canberra, Oz
http://icad.org/icad2016/
Also, check out the new products in development from Keith McMillen
http://www.keithmcmillen.com/labs/

Post

I'm thinking something involving speech (or e.g. finger tapping for rhythms) might be the most natural type of interface, given that musicians have songs playing in their heads (and they may be able to hum them).

Say there were a hum-to-sound-parameters or hum-to-MIDI translator, and a text-based interface as well, so one could e.g. write "from point x to point y make this sound more xyzer".
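To make the humming part concrete, here is a rough Python sketch of how the pitch-tracking front end of such a hum-to-MIDI translator could work; this is my own assumption of one possible approach (frame-by-frame autocorrelation snapped to the nearest MIDI note), and a real tracker would also need voicing detection, onset handling and smoothing.

```python
# Rough sketch of a hum-to-MIDI front end: estimate the pitch of a short frame
# by autocorrelation and snap it to the nearest MIDI note number.
import numpy as np

def frame_to_midi(frame, sample_rate, fmin=80.0, fmax=1000.0):
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo = int(sample_rate / fmax)          # smallest lag to consider
    hi = int(sample_rate / fmin)          # largest lag to consider
    lag = lo + int(np.argmax(ac[lo:hi]))  # lag of the strongest periodicity
    f0 = sample_rate / lag                # estimated fundamental in Hz
    return int(round(69 + 12 * np.log2(f0 / 440.0)))  # Hz -> MIDI note

# Example: a synthetic 220 Hz "hum" should come out as MIDI note 57 (A3).
sr = 44100
t = np.arange(2048) / sr
print(frame_to_midi(np.sin(2 * np.pi * 220.0 * t), sr))
```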

Post

To me it looks like you're narrowing it down too much. For instance:

* Most of "real" ("acoustical") instruments do have any parameters at all... (no knobs for piano, violin, guitar, sax).
* The keyboard is not the only control interface for a synthesizer. There are a lot of alternative interfaces (various wind/breath, string and strike controllers).
* (Curiously, the first synthesizers did not have any keyboard or parameters at all; were they "more musical" by your definition?)
* Parameters like "FM amount" (or "PWM width" or alike) are tied to a specific synthesis method (often developed to make it possible to make some beeps otherwise not possible in a physical world). And it's not really a problem to hide those "artificial non-musical parameters" behind "more-musical names/macros" (via some higher-level abstraction level) and there are/were such instruments. But it does not seem to make too much sense in a long run (too much guesswork/oversimplification/limitation involved?), otherwise we've already had these things wide-spread and be more popular. (Yet again, most likely because we can't have some kind of "AI" and "universal synth algorithm" behind this so far, thus such abstraction always implies that any "musical" parameter have to be somehow translated to a "technical" parameter(s) of a specific synth algo based on someone else's preferences, thus not necessary meeting your expectations). In simple words, there're hardcore and non-hardcore synth users/players and so far hardcore guys just win.
* After all, there are many synthesis algorithms that are more dedicated to "physical/real-world"-like sound generation (e.g. physical modelling), and they have fewer "technical" synthesis parameters to begin with.
* etc.
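As mentioned above, here's a toy Python sketch of what such a "musical macro" layer looks like in practice: one high-level knob fans out to several engine-specific parameters. The names, curves and weights are all made up by me, and that arbitrariness is exactly the limitation described in the list.

```python
# Toy sketch of a "musical macro" layer: a single high-level control drives
# several synth-specific parameters. The mapping encodes someone's taste.

def apply_brightness(amount):
    """amount in [0, 1] -> dict of low-level (hypothetical) synth parameters."""
    return {
        'filter_cutoff_hz': 200.0 * (2.0 ** (amount * 6.0)),  # 200 Hz .. 12.8 kHz
        'fm_amount':        amount ** 2,          # keep FM subtle until pushed hard
        'osc2_detune_ct':   2.0 + 10.0 * amount,  # a little extra shimmer when bright
    }

print(apply_brightness(0.2))
print(apply_brightness(0.9))
```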
Fluky wrote: than what electronic music production techniques are now (trial and error)?
:)
Fluky wrote: Therefore I think it'd be reasonable if instruments had more musical parameters and fewer technical ones.
Could you provide a few examples of what these "musical parameters" would look like?

Post

Max M. wrote: Could you provide a few examples of what these "musical parameters" would look like?
There's the Alchemy Perform section example.

I'm also thinking of parameters like those used in physical modelling synthesis. Physical modelling synthesizers often don't expose their scientific parameters (such as "coefficients") directly; instead, those are abstracted into meaningful parameters such as "material" and "stiffness", as in this synth:

[screenshot of a physical modelling synth with "material"- and "stiffness"-style controls]

I'd like to think about synthesis more in the terminology of "sound expression", rather than in parameter or scientific language.
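To illustrate what I mean (just a sketch of my own in Python, not any particular product's mapping): "physical"-sounding labels can sit on top of the internals of a bare-bones Karplus-Strong plucked string, where "brightness" and "damping" are the user-facing knobs and the loop-filter blend and per-sample decay are what the algorithm actually needs.

```python
# A very small Karplus-Strong plucked string with "physical" user-facing knobs.
import numpy as np

def pluck(freq, seconds, sample_rate=44100, brightness=0.5, damping=0.002):
    n = int(sample_rate / freq)               # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)        # noise burst = the "pluck"
    decay = 1.0 - damping                     # per-sample loop loss (sustain)
    blend = 0.4 + 0.5 * brightness            # loop lowpass mix (higher = brighter)
    out = np.empty(int(seconds * sample_rate))
    for i in range(len(out)):
        out[i] = line[i % n]
        avg = blend * line[i % n] + (1 - blend) * line[(i + 1) % n]
        line[i % n] = decay * avg             # filtered, attenuated feedback
    return out

tone = pluck(220.0, 1.0, brightness=0.8, damping=0.001)
print(tone[:5])
```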

Post

Sendy wrote:And the world of physical instruments can inform the creation of electronic sounds
A good example is Zoe Keating, who combined her cello and programming skills to become a "one-woman cello orchestra":
http://music.zoekeating.com

Post

Fluky wrote:I'd like to think about synthesis more in the terminology of "sound expression", rather than in parameter or scientific language.
There is a UK group developing free plugins based on a similar idea: "The plug-ins allow you to save and load parameter settings semantically, meaning you can type things like warm, fuzzy and bright into the text box and the computer will try and figure out what you mean!"
http://www.semanticaudio.co.uk
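A naive Python sketch of the general shape of that idea (my own toy example with made-up numbers, not necessarily how these plug-ins work under the hood): each descriptor word maps to a stored parameter set, and typing several words blends them.

```python
# Naive "semantic" parameter recall: descriptor words map to stored parameter
# sets, and a phrase containing several known words averages them.

DESCRIPTORS = {
    'warm':   {'eq_low_db':  3.0, 'eq_high_db': -2.0, 'drive': 0.2},
    'bright': {'eq_low_db': -1.0, 'eq_high_db':  4.0, 'drive': 0.1},
    'fuzzy':  {'eq_low_db':  1.0, 'eq_high_db':  0.0, 'drive': 0.7},
}

def settings_for(text):
    words = [w for w in text.lower().split() if w in DESCRIPTORS]
    if not words:
        raise ValueError('no known descriptors in: ' + text)
    keys = DESCRIPTORS[words[0]].keys()
    return {k: sum(DESCRIPTORS[w][k] for w in words) / len(words) for k in keys}

print(settings_for('warm and fuzzy'))   # averages the 'warm' and 'fuzzy' sets
```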

Post

Fluky wrote: Camel Audio Alchemy's Perform section is pretty neat. It lets one define "parameter sections" between which the plug-in interpolates as one moves between the squares.

[screenshot of Alchemy's Perform pad]

That's like "thinking in sounds" more than "thinking in parameters". Once it has been set up.
Yes, this is also the case with NI Kore 2. The idea of thinking in sounds and timbres (like a painter's palette) rather than just parameters was central to the development of Kore (which in turn was a big influence on Alchemy). I wish more synths had such performance controls; being able to move smoothly between whole plug-in states has let me explore a form of music where the focus is the expressive movement between timbres and textures more than notes.

Post

Michael L wrote:
Fluky wrote:I'd like to think about synthesis more in the terminology of "sound expression", rather than in parameter or scientific language.
There is a UK group developing free plugins based on a similar idea: "The plug-ins allow you to save and load parameter settings semantically, meaning you can type things like warm, fuzzy and bright into the text box and the computer will try and figure out what you mean!"
http://www.semanticaudio.co.uk
Hah. Great. Never heard of these guys. Thanks.
