Expressive sound synthesis?

DSP, Plugin and Host development discussion.

Post

Nice idea! Mobile programming seems more interesting to me than DAWs... I think maybe mobile users are still alive.
you come and go, you come and go. amitabha neither a follower nor a leader be tagore "where roads are made i lose my way" where there is certainty, consideration is absent.

Post

Ridan wrote: brain waves and amplification
It seems like this point always gets overlooked in threads like this. Or no one has ever understood wtf I'm talking about. :hihi:
I guess it's boring because it's not real yet.
I think various forms of bio-feedback are the future of synthesis. We already have a few, but I think brainwave controllers are inevitable in the far future, as MRI tech gets cheaper, better, and ubiquitous, and real instruments, and the skills to play them, become less so.
It is hard to imagine how far it could go... but the idea is that your brainwaves are like MIDI, and you tune your brainwaves to control the sound. Via bio-feedback from brain > sound > ears > brain, it tunes itself, literally, to how you want it to sound.
If it sounds like nonsense, check this out:
http://www.medscape.com/viewarticle/855754
https://www.eeginfo.com/what-is-neurofeedback.jsp

The furthest I can conceive of it is just as a mixing/mastering type app, where there is a composition already there for your mind to focus on, and your subconscious nudges it here and there with just a handful of control connections. Master a track by listening to it. :phones:

But what if we find out that it's possible for a human to control fifty 'connections' subconsciously, or a thousand... :wheee:
Get a new synth, install it, put on your helmet, listen to it scream at you for half an hour until your brain figures out how to make it sound better, and... that's it. After that, it sounds right out of your head. :D

Oh- As always, I am seeking investors. ;)

Post

itoa wrote: I made something quite expressive :) Impaktor - transients are sampled from a microphone and fed into a physical model. Take a look:

https://www.youtube.com/watch?v=KjWT5ixO9Ck
So many times I have tapped the desk imagining a better drum sound :hihi:

Post

itoa wrote: I made something quite expressive :) Impaktor - transients are sampled from a microphone and fed into a physical model. Take a look:

https://www.youtube.com/watch?v=KjWT5ixO9Ck
I love Impaktor! Big fan... :love:
Incomplete list of my gear: 1/8" audio input jack.

Post

It'd be the holy grail of sound synthesis if one could "think" or express sound into an additive synth that models the sound that was thought/expressed.

Most artists are "translating ideas" into their medium, but that doesn't work so straightforwardly in sound, because sound cannot be created the way writing or painting can.

Post

Historically there have been three rough "branches" of investigation into synthesis: "total timbral control" (e.g. Stockhausen or Xenakis), "acoustic emulation" (most commercialized synthesis algorithms), and "ergonomic player's instruments" (roughly, most popular electronic genres).

From the standpoint of the people wanting total control, we're close to done. You can synthesize any monophonic sound if you can specify it as an evolving series of partials, and you have some options to spatialize it for the listener. There is no requirement there for it to be a fun or easy task: presumably if you're doing this, you have some high-level document describing how you're going to specify each part of the piece and the execution is a brute force labor involving a lot of programming.
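To make the "evolving series of partials" idea concrete, here is a minimal additive-synthesis sketch in Python. The frequencies and decay envelopes below are illustrative assumptions, not taken from any particular piece:

```python
import math

SAMPLE_RATE = 44100

def additive_synth(partials, duration):
    """Render a monophonic sound as a sum of partials.

    `partials` is a list of (freq_hz, amp_envelope) pairs, where
    amp_envelope(t) returns that partial's amplitude at time t seconds.
    """
    n_samples = int(duration * SAMPLE_RATE)
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        sample = sum(env(t) * math.sin(2 * math.pi * f * t)
                     for f, env in partials)
        out.append(sample)
    return out

# A plucked-string-like tone: eight harmonics of 220 Hz, each decaying
# faster and starting quieter the higher it is.
partials = [(220 * k, (lambda k: lambda t: math.exp(-3 * k * t) / k)(k))
            for k in range(1, 9)]
samples = additive_synth(partials, 0.5)
```

"Total control" in this sense just means supplying a sufficiently detailed `partials` specification; the labor is in writing that specification, not in the rendering loop.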

People wanting acoustic emulation have a ways to go, because the majority of interfaces remain based on a "note on-note off" playback model. This plus an off-the-shelf MIDI controller gives you all the expressiveness you need for performing clavier and some kinds of percussion (bells do very well), but it doesn't really reflect the microexpressions used in other acoustic instruments, even with the extensive programming going into new sample libraries. As such, pieces written around synthesis tend to bias towards that limited soundset if they are aiming for a believably acoustic texture, and only go into the rest at the expense of the arranger's sanity. But the synthesis algorithms themselves have mostly stabilized on sampling as a baseline, with occasional developments in physical models.
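For reference, the "note on-note off" model boils down to three-byte event messages, with continuous gestures squeezed through channel-wide control changes. A rough sketch of the raw byte layout (the status bytes are standard MIDI; the helper names are mine):

```python
# Standard MIDI channel-voice status bytes (high nibble).
NOTE_ON, NOTE_OFF, CONTROL_CHANGE = 0x90, 0x80, 0xB0

def note_on(channel, note, velocity):
    """3-byte note-on: status | channel, note number, velocity."""
    return bytes([NOTE_ON | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note-off with release velocity 0 (the common case)."""
    return bytes([NOTE_OFF | channel, note & 0x7F, 0])

def expression_cc(channel, value):
    """CC#11 (expression): the coarse, 128-step, channel-wide
    workaround for continuous dynamics - one value for every
    sounding note, which is exactly the limitation described above."""
    return bytes([CONTROL_CHANGE | channel, 11, value & 0x7F])

msg = note_on(0, 60, 100)  # middle C on channel 1, velocity 100
```

Everything between the on and the off is invisible to this model unless the instrument exposes it as one of those channel-wide controllers, which is why per-note gestures on strings or winds translate so poorly.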

The last group has, in a lot of ways, barely started. Most of the time, synthesized instruments are bolted on to a production. There are fashionable dance-floor sounds that come and go, and productions have become correspondingly more complex and elaborate as tools ease and lower the bar for basic mixing. But these things are tied to the recording-industry mentality of crafting a perfect radio single and so all the tools are built around power - more knobs and patches and presets. "GAS" in synthesizers is driven by the idea that there's always a new sound to be had, and that it will definitely benefit you.

What this group is craving is breakthrough design, ways of thinking about the process that simply haven't been approached before and that make a new style of music easy to access, without the result being a single-use toy synth that people only push the "demo" button on. This is an area where it's not simply a matter of having a "synth module" that you plug your clavier into; it requires thinking about the entire construction of a performance from the ground up and making an interface that fits it all together - hardware, software, presets. Everything has to be designed so that it can satisfy a wide range of needs without also being a space shuttle control panel.

Finding the "original" breakthroughs in that sense is just a matter of how far back you want to go. Higher volume and better amplification defined a lot of instrument developments, acoustic and electronic, from the 19th century onwards. Effects pedals added a lot of character. A starting point for fun, playable "pure synthesis" instruments probably came with sequencing, arpeggiators, and especially auto-accompaniment. Berlin-style sequences don't need complex sounds and they are not hard to notate traditionally either, but the precision and extreme repetition is essential and not something you'd be inclined to perform manually. Ragga was built on the Casio MT-40, and the TR-808 and TB-303 became the basis of a huge swath of EDM. It's easy to play along with a Casio, and only slightly harder to program a drum machine. Accompaniment technology itself has developed, but only subtly, still typically using presets with "intro", "fill 1", "fill 2", "ending" and chord voicing controlled by clavier macros.

Chiptunes and sampling are close followers to auto-accompaniment. Sampling as a way of deriving new music (which would be the "folk" way of doing it) has been squashed with copyright law, and so it's existed in a sort of marginalized position for years, explored by people who will work to get clearance or who intend to avoid wide distribution, and exploited to sell sound-of-the-week packs, but not embraced universally and unreservedly. Chiptunes are accessible in the sense that almost everyone has the hardware, but they are also traditionally very technical and hacker-centric, and not so immediate/improvisational - the barrier is with time and knowledge, not money. Crossover between chip sounds and the rest of the music world tends to occur through sampling because programming the original stuff is so idiosyncratic.

More recent developments tend to be tied with new effects. The autotune sound ruled through the late 00's because it really was a new and different way to use vocals. Distortion has emerged as a major timbral tool, not just a thing you do to play power chords. The effects expand the range of existing instruments without necessitating mastery of a completely new interface. The interface may be the hardest place to innovate.

A lot of new hardware interfaces try to substitute for the clavier, but they can't maximize their expressive potential very well if their focus is on passing around MIDI messages and hoping that the user designs the rest of the workflow. I feel like some of the better developments here occur in academia, where grad students are free to pursue unique instruments for one-hour performances of drones, rumbles, and noise.

We have the problem both of defining what music is, when it can be anything, and what a useful and expressive instrument is, when to be commercially viable it needs to work with the demands of existing players.

Post

highkoo wrote:I think various forms of bio-feedback are the future of synthesis. We already have a few, but I think brainwave controllers are inevitable in the far future, as MRI tech gets cheaper and better and ubiquitous, and real instruments and the skills to play them, become less so.
It still takes a full room and a really loud machine, with no room to play inside. So MRI will forever be a dream; EEG may be doable.

Post

Triplefox wrote:..As such, pieces written around synthesis tend to bias towards that limited soundset if they are aiming for a believably acoustic texture, and only go into the rest at the expense of the arranger's sanity. .."GAS" in synthesizers is driven by the idea that there's always a new sound to be had, and that it will definitely benefit you.
"i just wrote this same exact post" :)

Several of the points you mention concur with points in a PDF I just published...
http://www.kvraudio.com/forum/viewtopic.php?p=6449939

Post

I'm a bit late to the party (I just signed up).

To solve exactly this problem, I made VoiceOfFaust. (https://magnetophon.github.io/VoiceOfFaust/)

The core ideas are a vocoder and a pitchtracker, but with MUCH more sonic flexibility.
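For anyone curious what the pitch-tracker half of that design involves, here is a naive autocorrelation pitch estimator in Python. VoiceOfFaust's actual tracker is a separate, more sophisticated program (as the link explains); this sketch only shows the basic principle:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate: find the lag, within the
    plausible period range, where the signal best matches a shifted
    copy of itself, and convert that lag back to a frequency."""
    min_lag = int(sample_rate / fmax)
    max_lag = int(sample_rate / fmin)
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Quick check: a 200 Hz sine sampled at 8 kHz.
sr = 8000
sig = [math.sin(2 * math.pi * 200 * n / sr) for n in range(1024)]
```

Real trackers add windowing, normalization, and voicing detection on top of this; plain autocorrelation octave-errors easily on actual voices, which is part of why a dedicated external tracker makes sense.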

Post

magnetophon wrote: VoiceOfFaust

That looks really useful. Can you build a VST plugin please?

Post

Can you build a VST plugin please?
Faust compiles to VST (and AU, and standalone for Linux, OSX, iOS, Android, Windows, and 30 more) with one command.

The only problem is that at the moment it's receiving the pitch info from another program via OSC.
The master/slave architecture is a workaround, as explained in the above link.

Post Reply

Return to “DSP and Plugin Development”