Where is a synthesizer that can "listen" to another synth and then recreate that sound?

VST, AU, AAX, CLAP, etc. Plugin Virtual Instruments Discussion

Post

I think it would take some AI that learns from human sound designers emulating patches across synths.

Post

Sampling is the easy answer.

There are granular resynthesis methods, and various additive synthesis methods.

And then there's this: Machine Tongues XVI: Genetic Algorithms and Their Application to FM Matching Synthesis

That was from 1993. More recently:
Made in collaboration with Aphex Twin, the midimutant learns how to program your DX7 synth so you don't have to. Equipped only with a microphone input and midi output, the midimutant runs on a Raspberry Pi and uses artificial evolution to grow new sounds on hardware synthesisers that mimic an example sound you provide.
Listening to the examples, it's actually somewhat worse than the Panharmonium, which is intentionally "wrong" for the sake of being interesting (otherwise it would just resynthesize with an inverse FFT instead of a bank of oscillators).
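For the curious, that GA-based FM matching idea can be sketched in a few lines. This is a toy version, not the paper's or midimutant's actual algorithm: it evolves two-operator FM parameters (carrier ratio, modulator ratio, modulation index — the gene layout and ranges here are my own illustrative choices) until the synthesized spectrum matches a target spectrum.

```python
import numpy as np

SR, N = 44100, 4096
t = np.arange(N) / SR

def fm_tone(f0, cr, mr, index):
    # Two-operator FM: carrier at f0*cr, modulator at f0*mr, index in radians
    return np.sin(2 * np.pi * f0 * cr * t + index * np.sin(2 * np.pi * f0 * mr * t))

def spectrum(x):
    mag = np.abs(np.fft.rfft(x * np.hanning(N)))
    return mag / (mag.max() + 1e-12)          # normalized magnitude spectrum

def fitness(genes, target_spec, f0=220.0):
    cr, mr, index = genes
    return -np.sum((spectrum(fm_tone(f0, cr, mr, index)) - target_spec) ** 2)

LO, HI = np.array([0.5, 0.5, 0.0]), np.array([4.0, 4.0, 8.0])  # gene bounds

def evolve(target_spec, pop_size=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(LO, HI, size=(pop_size, 3))
    for _ in range(gens):
        scores = np.array([fitness(g, target_spec) for g in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]   # best half survives
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1)                    # crossover: average
        children += rng.normal(0.0, 0.05, children.shape)  # mutation
        children[0] = elite[-1]                            # elitism: keep best
        pop = np.clip(children, LO, HI)
    scores = np.array([fitness(g, target_spec) for g in pop])
    return pop[scores.argmax()]

# Try to recover the parameters of a known FM tone:
target = spectrum(fm_tone(220.0, cr=2.0, mr=1.0, index=3.0))
best = evolve(target)
```

The fitness landscape of FM is famously bumpy (small ratio changes produce wildly different sidebands), which is exactly why the paper reached for genetic algorithms instead of gradient methods.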

I also seem to recall a hacked Yamaha Disklavier piano that could "resynthesize" sounds by rapidly playing different harmonics at different velocities.

Post

Yes, I was going to say... sampler.

Now, what would be far better is hearing a sound, its filters, etc., and being able to clone all of that. Oh yeah: physical modelling. I guess between all of that and resynthesis, we're covered.

Post

Hartmann Neuron

Post

vurt wrote: Tue Jan 21, 2020 2:22 pm what?
with that kind of thinking we'd still be scratching our arses in trees.
No, it simply means that no one yet has been successful, but that many have tried since it's one of the holy grails.
It's like asking "yo bro's, how come no one's invented free energy yet?"

Post

This is a very interesting topic. I have an idea around this that I'd love to build one day, but for now I simply don't have time for it. The idea would be to make an FM-izer that can take any sound and resynthesize it so it sounds like it's coming from a DX7, possibly with a lot more operators, but possibly each doing quite small mod depths depending on what the target sound is. I have a couple of ideas about how it could be made.

Post

What I would find interesting is not a synth that recreates a sound, but a tool capable of programming any synth to sound as close as possible to a given sound set. A combination of a neural net and evolutionary methods... For sure, with current technology it's possible...

Post

Google and search for Hartmann Neuron

Post

seriously though... I've been doing this a lot with Zampler.
First, I record C2, C4, and C6 notes from a CPU-hungry synth with Wavosaur.
Then I loop those wavs with TinyWave Editor (TWE), then I run a batch file I wrote that generates a new Zampler bank folder and FXB file, generates a patch folder with the patch name, copies the looped wavs into it, then creates an FXP and SFZ file in the bank folder, ready to start Zampler, open the bank, add the new preset, tweak it to my liking, then re-save it and the bank. I do this with synths I layer or stack together to create dense, fat sounds in Zampler minus the CPU hit of the individual synths.
I am working on polishing the system up into a folder you copy to the Zampler folder, called "makepresets", that contains the scripts and files to automate turning wavs into Zampler presets.
I know there are already programs that do this, but these scripts make patch-making so fast, I'm up to my 5th bank of 128 presets!
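The SFZ-writing step of a workflow like this can be sketched in Python. The file naming, key splits, and opcode choices below are illustrative assumptions on my part, not the actual batch file or anything Zampler-specific:

```python
# Generate a minimal .sfz file mapping three looped samples (C2, C4, C6)
# across the keyboard. Paths and split points are illustrative.
from pathlib import Path

def write_sfz(patch_name, sample_dir, out_dir):
    # (note name, root key, low key, high key); middle C (C4) = MIDI 60
    zones = [("C2", 36, 0, 48), ("C4", 60, 49, 72), ("C6", 84, 73, 127)]
    lines = []
    for note, root, lo, hi in zones:
        lines += [
            "<region>",
            f"sample={sample_dir}/{patch_name}_{note}.wav",
            f"pitch_keycenter={root} lokey={lo} hikey={hi}",
            "loop_mode=loop_continuous",  # play the loop points saved in the wav
            "",
        ]
    out = Path(out_dir) / f"{patch_name}.sfz"
    out.write_text("\n".join(lines))
    return out

sfz_path = write_sfz("FatStack", "FatStack", ".")
```

Each sample covers a two-octave window around its root, so pitch-shifting artifacts stay within about an octave in either direction.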
HW SYNTHS [KORG T2EX - AKAI AX80 - YAMAHA SY77 - ENSONIQ VFX]
HW MODULES [OBi M1000 - ROLAND MKS-50 - ROLAND JV880 - KURZ 1000PX]
SW [CHARLATAN - OBXD - OXE - ELEKTRO - MICROTERA - M1 - SURGE - RMiV]
DAW [ENERGY XT2/1U RACK WINXP / MAUDIO 1010LT PCI]

Post

layzer wrote: Sat Jan 25, 2020 2:58 am seriously though...
Why not use this?

https://www.discodsp.com/bliss/

Or this?

https://www.tx16wx.com/

Post

V0RT3X wrote: Tue Jan 21, 2020 5:02 am Interesting ideas so far, but maybe what I'm thinking of is still in the realm of science fiction.
The principle was known already in the last millennium. The first attempt to do it practically was called the Acxel. Technically it's additive synthesis. The sheer number of parameters was a problem for the hardware and software back then. Nowadays it's not a technical problem anymore, but getting your head around it is still overwhelming...
All the additive synths which do it today have been mentioned already. It's more archaeology than science fiction...
That technology doesn't even need AI (the Kemper doesn't need it either...).
The interesting part of such technology would be modifying the sounds into something else... Morphing additive sounds is easy, but you might want different sound-describing knobs instead. There, machine learning might be useful...
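A bare-bones version of that additive analysis/resynthesis idea can be sketched in a few lines. This is a sketch under heavy simplifying assumptions (one steady-state frame, the loudest FFT bins treated as "partials", no phase or envelope tracking), nothing like a real Acxel:

```python
# Analyze one frame with an FFT, keep the strongest partials,
# rebuild the sound with a bank of sine oscillators.
import numpy as np

SR, N = 44100, 8192

def analyze(frame, n_partials=16):
    """Return (freqs, amps) for the strongest spectral bins of one frame."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), 1 / SR)
    mags = np.abs(spec)
    idx = np.argsort(mags)[-n_partials:]       # bins with the most energy
    return freqs[idx], mags[idx] / mags[idx].max()

def resynthesize(freqs, amps, dur=0.5):
    t = np.arange(int(SR * dur)) / SR
    out = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    return out / np.abs(out).max()

# Round-trip a sawtooth-ish test tone through analysis + sine bank
t = np.arange(N) / SR
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 9))
freqs, amps = analyze(tone)
clone = resynthesize(freqs, amps)
```

The "overwhelming" part the post mentions is everything this sketch skips: hundreds of partials, each with its own amplitude and frequency envelope over time, plus a noise residual.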

Post

V0RT3X wrote: Mon Jan 20, 2020 9:00 pm I would like to see something like this made, like the Kemper Profiling engine but for synthesizers. I'm sure with neural networks that something could be made to "listen" to a vintage synth, and then attempt to recreate the sound using it's own engine.

This would be a game changer even if the resulting sound came 90% close.

Thoughts?
"it's own engine" would need to be able to emulate every sound it hears.... not sure that exists

prob better to either use the original synth or an emulation of said synth

Post

You could do this with neural networks and a modular synth engine of some sort. I've thought about it before, but never got round to really trying it out.

Post

If you wrote a neural-net application that could host VSTs, it could manipulate a synth's parameters, render, compare the result to a recorded sound, and keep refining, and it would work with any synth.
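That render/compare/refine loop can be sketched without any actual VST hosting. Here `render()` is a stand-in three-parameter "synth" of my own invention (two detuned saws into a one-pole lowpass), and the refinement is a simple (1+1) evolution strategy rather than a neural net; all names and parameters are illustrative:

```python
# Black-box parameter matching: mutate the synth's parameters, render,
# compare spectra against the target, keep the change only if it helps.
import numpy as np

SR, N = 44100, 4096
t = np.arange(N) / SR
rng = np.random.default_rng(1)

def render(params):
    detune, cutoff, mix = params
    saw = lambda f: 2.0 * ((f * t) % 1.0) - 1.0        # naive sawtooth
    x = saw(110.0) + mix * saw(110.0 * (1.0 + detune))
    a = float(np.clip(cutoff, 0.01, 0.99))             # one-pole lowpass coeff
    y, acc = np.empty_like(x), 0.0
    for i, s in enumerate(x):
        acc += a * (s - acc)
        y[i] = acc
    return y

def loss(params, target_spec):
    mag = np.abs(np.fft.rfft(render(params)))
    return float(np.sum((mag / (mag.max() + 1e-12) - target_spec) ** 2))

# Pretend this spectrum came from a recording of the synth we want to clone
target_mag = np.abs(np.fft.rfft(render([0.01, 0.3, 0.8])))
target_spec = target_mag / target_mag.max()

# (1+1) evolution strategy: accept a mutation only if it matches better
best = rng.uniform(0.0, 1.0, 3)
best_loss = init_loss = loss(best, target_spec)
for _ in range(300):
    cand = np.clip(best + rng.normal(0.0, 0.05, 3), 0.0, 1.0)
    cand_loss = loss(cand, target_spec)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss
```

Swapping the stand-in `render()` for a hosted plugin's render call is the hard engineering part; the optimization loop itself stays this simple.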

Post

V0RT3X wrote: Tue Jan 21, 2020 5:02 am Interesting ideas so far, but maybe what I'm thinking of is still in the realm of science fiction.
Probably. And I would suggest “techno-fantasy” is a more accurate name for that genre.

AI isn’t going to do this for us any time soon. All the talk around “AI” is bullshit and misappropriation of terms. What human brains do is really not even close to being emulated.

I’m fairly sure that cleverly-written algorithms could “analyze” and categorize sounds (already being done by music fingerprinting systems), to make libraries of “this is like that”, but it would be heavily flawed because software doesn’t “hear” (and that music fingerprinting system is constantly mis-identifying content as “copyright infringement”, and the business has zero incentive to fix that).

Creating reproductions of sounds from scratch is a much bigger step up on the ladder toward actual intelligence (and complex automation of complex synthesis engines).

I’d also suggest that using additive synthesis algorithms to make “sounds-alike” sounds is... not really “sound design” (though it’s certainly part of it). It’s more like data format conversion, IMO. If that’s all you’re looking for, though...
