A few new Music Tech ideas

DSP, Plugin and Host development discussion.

Post

Hey guys and gals!

I had a few ideas for new audio techniques which I thought might be of interest to some of you.

Some of these ideas might be more original than others: the kind of thing that could perhaps be built into an original instrument, or even have a nice little article written about it.

Instead, I thought I'd open the ideas for discussion on this forum. They haven't even been prototyped, much less implemented or incorporated into finished software, which is where some of you might come in.

I'll start with the generally more interesting ideas, then get into the more obvious or less original ones, which I've decided to include as well.

XM: (My denotation of 'something-modulation' synthesis in general)

WM or SM: 'Wave modulation' or 'shape modulation.' WM would be an XM where the wavetable position is modulated by another signal. SM would similarly modulate the wave in some other way, say the blend between a sine and a triangle wave.
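A minimal sketch of the WM idea (the names are the poster's; the semantics here are my guess): one oscillator reads the wave while a second, audio-rate oscillator scans the frame position in a small table of wave shapes.

```python
import math

def wm_oscillator(n_samples, sr, f_carrier, f_mod, mod_depth, frames):
    """'Wave modulation' sketch: a modulator oscillator drives the
    wavetable frame position at audio rate, while the carrier phase
    reads the (blended) wave shape."""
    out = []
    phase = 0.0   # carrier phase in [0, 1)
    mphase = 0.0  # modulator phase in [0, 1)
    n_frames = len(frames)
    for _ in range(n_samples):
        # modulator sweeps the frame position, centred in the table
        pos = (0.5 + 0.5 * mod_depth * math.sin(2 * math.pi * mphase)) * (n_frames - 1)
        lo = int(pos)
        hi = min(lo + 1, n_frames - 1)
        frac = pos - lo
        # linear blend between adjacent frames, evaluated at the carrier phase
        s = (1 - frac) * frames[lo](phase) + frac * frames[hi](phase)
        out.append(s)
        phase = (phase + f_carrier / sr) % 1.0
        mphase = (mphase + f_mod / sr) % 1.0
    return out
```

With two frames (say a sine and a triangle), this is exactly the SM case of blending between shapes under audio-rate control.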

DM and HM: 'Delay modulation' and 'haas modulation.' DM modulates the predelay time of a duplicated signal (say between 0 and 30 ms, as per haas effect). HM modulates the panning of those signals, that is the stereo width of the haas effect (from -100% to 100% pan). I realise now that 100% could loop back on -100% if you just invert the phase, but it might get a click (I'll get to circular wavetables later).
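A sketch of DM under my reading of the idea: duplicate a mono signal and sweep the copy's predelay inside the classic Haas range. HM would analogously sweep the panning of the two copies.

```python
import math

def dm_haas(mono, sr, max_delay_ms=30.0, lfo_hz=1.0):
    """'Delay modulation' sketch: the right channel is a copy whose
    predelay an LFO sweeps between 0 and max_delay_ms (Haas range);
    the left channel stays dry."""
    max_d = sr * max_delay_ms / 1000.0
    left, right = [], []
    for n, x in enumerate(mono):
        # LFO sweeps the delay of the duplicated (right) channel
        d = 0.5 * (1.0 + math.sin(2 * math.pi * lfo_hz * n / sr)) * max_d
        i = n - int(d)
        right.append(mono[i] if i >= 0 else 0.0)
        left.append(x)  # dry copy
    return left, right
```

The integer-sample delay here would click under fast modulation; a real implementation would interpolate fractionally.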

PNM and MSM: 'Pan modulation' and 'mid/side modulation.' Modulates pan and mid/side. Theoretically, a kind of stereo AM.
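PNM could be sketched as an audio-rate modulator steering an equal-power pan per sample, which is in effect a complementary stereo AM (again, my reading of the idea):

```python
import math

def pan_mod(mono, modulator):
    """'Pan modulation' sketch: a modulator in [-1, 1] steers an
    equal-power pan of each sample. At audio rate this amounts to
    amplitude modulation with complementary depth in each channel."""
    left, right = [], []
    for x, m in zip(mono, modulator):
        theta = (m + 1.0) * math.pi / 4.0  # -1 -> hard left, +1 -> hard right
        left.append(x * math.cos(theta))
        right.append(x * math.sin(theta))
    return left, right
```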

Samples/roms in XM: It seems that XM synthesis, like in FM8 for example, tends to be based around oscillators and/or input signals, but I don't see a reason why samples or roms (as in romplers) couldn't be incorporated in any combination and with any kind of XM.

Other synthesis:

Romtable: Like wavetable synthesis, but play back a short loop of a sample, as per a rompler. The romtable would be a longer sample which changes timbre over time (like go 'wahaw' with a guitar wah pedal) and the shorter rom sample could glide along the sample to change the sound, plus you could set and modulate the length of the playback rom. You could theoretically do this in a regular sampler, so this may just be a matter of naming (or renaming?) something that already exists, in a sense. You could XM the rom position like in WM (this may not need a new name, but you could possibly call it RTM for 'romtable modulation'). You could theoretically XM the rom length as well (LM for 'length modulation').
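The romtable idea can be sketched as a loop window scanning a longer sample, with the window start and length supplied as per-sample envelopes so both can glide or be modulated (names and interfaces here are hypothetical):

```python
def romtable(sample, n_out, start_env, length_env):
    """'Romtable' sketch: loop a short window of a longer, evolving
    sample. start_env(n) and length_env(n) give the window start and
    length at output sample n, so both can be modulated over time."""
    out = []
    pos = 0.0
    for n in range(n_out):
        start, length = start_env(n), length_env(n)
        # read position loops inside the current window
        out.append(sample[int(start + (pos % length)) % len(sample)])
        pos += 1.0
    return out
```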

Circular wavetables: Simple idea, just have the wavetable (or romtable) loop back on itself, so that it can be modulated beyond the limits of the wavetable. Presumably you would use a circular knob which can turn forever if it wishes.
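The circular-table idea reduces to a wrapped (modular) table lookup, where the last entry interpolates back toward the first instead of clamping:

```python
def circular_lookup(table, pos):
    """Circular-wavetable sketch: the position wraps past either end,
    so a modulator can push it 'beyond' the table without clamping
    (and without the discontinuity that causes clicks)."""
    pos = pos % len(table)          # wrap; works for negative positions too
    lo = int(pos)
    hi = (lo + 1) % len(table)      # last entry interpolates back to the first
    frac = pos - lo
    return (1 - frac) * table[lo] + frac * table[hi]
```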

Haas/predelay on osc/fx: Have each oscillator (or generator of some sort) have access to its own haas effect (and panning etc, as is typical). Also being able to predelay (or push forward) these sound generators separately by up to about 150ms would be nice (as I've found that predelaying tracks on the desk is useful, like pushing your snare forward a little and your cymbals back a little, for example). The predelay could also apply to effects. Reverbs often have it, but something like a downsampler generally won't (downsampling also benefits from filtering the wet).

GUI

Ghosts: A second dot or marker which shows where the setting is now after modulation (so it would generally be moving). It may appear outside the marker for the static user setting on a knob, or something analogous for a slider etc.

Hover numbers: Hover over a knob to see the value as a number near the cursor, or other information, which is otherwise hidden to reduce clutter. This is probably not that original an idea in the general context of software.

So there you have it. Feel free to discuss, explore, prototype, implement or incorporate any of these ideas as you wish. I only ask that if you do use an idea, it would be nice for 'Jonah Mann' to be acknowledged somewhere for the originating credit.

Perhaps you'd also like to put forth your new and/or eccentric ideas in this thread?

Appendix A
The mental process
So here's how I came up with the ideas. Instead of just reading (the introductory chapter of BasicSynth was the material), I decided to actually, physically take notes like they tell you to at university. Even though I generally knew this stuff (the difference between subtractive and FM synthesis, etc.), summarising and consolidating the material helped me think about things more clearly and deeply, and thus more originally. The moral of the story is that if you want to develop some original concepts, sit down and take notes on the stuff you already know.

Cheers :tu:
- Jonno.

Post

Ninja_Edit wrote:
WM or SM: 'Wave modulation' or 'shape modulation.' WM would be an XM where the wavetable position is modulated by another signal. SM would similarly modulate the wave in some other way, say the blend between a sine and a triangle wave.
so, "WM" is basically Yamaha's phase modulation, but with wavetables
and "SM" is modulatable blending between different wavetables, like Roland's "vector synthesis" thing
PNM and MSM: 'Pan modulation' and 'mid/side modulation.' Modulates pan and mid/side. Theoretically, a kind of stereo AM.
i can't believe there's not a single synthesizer with a stereo voice structure that has modulatable pan
or maybe i can, would modulating that be useful?
Circular wavetables: Simple idea, just have the wavetable (or romtable) loop back on itself, so that it can be modulated beyond the limits of the wavetable. Presumably you would use a circular knob which can turn forever if it wishes.
your description is a bit confusing, but i'm sure there's at least one existing sampler that can loop a wavetable in one or two ways
i myself have an unfinished synth which incorporates this simple functionality too
Haas/predelay on osc/fx: Have each oscillator (or generator of some sort) have access to its own haas effect (and panning etc, as is typical). Also being able to predelay (or push forward) these sound generators separately by up to about 150ms would be nice (as I've found that predelaying tracks on the desk is useful, like pushing your snare forward a little and your cymbals back a little, for example). The predelay could also apply to effects. Reverbs often have it, but something like a downsampler generally won't (downsampling also benefits from filtering the wet).

GUI

Ghosts: A second dot or marker which shows where the setting is now after modulation (so it would generally be moving). It may appear outside the marker for the static user setting on a knob, or something analogous for a slider etc.

Hover numbers: Hover over a knob to see the value as a number near the cursor, or other information, which is otherwise hidden to reduce clutter. This is probably not that original an idea in the general context of software.
i'll comment on these a bit later
So there you have it. Feel free to discuss, explore, prototype, implement or incorporate any of these ideas as you wish. I only ask that if you do use an idea, it would be nice for 'Jonah Mann' to be acknowledged somewhere for the originating credit.
don't get me wrong here, some of your ideas already exist, while others might not exist because of one of a few possible reasons:
1) no one thought of that, in which case you've got an original idea
2) someone thought of that, tested it, and found that it's not useful
3) someone thought of that, but the realtime performance cannot be justified
there may be more

so, i hope you don't seriously expect to be credited for all of these "ideas"
for panning, and haas effects, this requires a synthesizer with a stereo voice structure, and the haas effects specifically require delay lines per voice, per channel
i probably wouldn't put those in a synth today, because i can think of more effective and interesting ways to waste CPU cycles, but in the future, it may be no problem for a polysynth to be fully stereo and afford to have per voice effects in multiple places

for the GUI aspects, some developers use existing plugin frameworks like juce or iplug, while others roll their own.. making the graphics fancier can actually take a lot of dev time, which could otherwise be spent elsewhere
i myself have figured that it's very useful to "see" the value of a parameter as i adjust it, but i didn't do it with "hovering" text because the framework i use doesn't really have an easy way to make that.. instead, i made one place on the GUI which looks like a text display and shows just that. that's not an original idea, lots of other plugins have something like a "display" .. hardware devices (even midi controllers) have one too

if you just wanted to share/discuss your ideas - that's perfectly fine
i could add a few of mine to the thread, but they might be useless
It doesn't matter how it sounds..
..as long as it has BASS and it's LOUD!

irc.libera.chat >>> #kvr

Post

here's an idea, which can be made a reality if lots of DAW developers like it (and if it doesn't happen to be technically difficult to implement on their side, which i can't judge myself)

the problem: MIDI in and out of the computer, and jitter
a solution already exists to remove the jitter (and latency) from the MIDI output, using a specific plugin (categorized as an instrument) which accepts MIDI input from the DAW, and converts it into audio, which can (should) then be routed to a separate audio output, and fed into a converter box that converts it into MIDI, which is thus in perfect sync with the rest of the audio from the DAW, and can be fed into external hardware devices
this scenario already works, to a large degree

going the other direction - MIDI from outside going into the DAW - is not so easy
a hardware converter can be made, which converts the MIDI into audio that can be recorded on a dedicated audio channel, and a plugin can be made that loads as an "effect", converting that audio into MIDI events which can then be outputted to the DAW, but there are a few problems here
1) some DAWs may not permit an "effect" to have midi output
2) many people would like to synchronize (slave via MIDI start/stop/continue/clock) their DAW to the MIDI from outside, in which case the DAW should process the special plugin in a special way - let it process the audio first, then use the MIDI it outputs, not the other way around
3) most (if not all) DAWs don't have a way to use a plugin as the source of external MIDI data
4) some DAWs interpret MIDI data from plugins or from external hardware in specific ways, like, filtering out certain messages, etc.. the specific plugin here might need to be treated more as if it's an external midi source, not as a mere midi-generating plugin

if this gets resolved, you would be able to use a small converter box on a spare audio channel and a special plugin to get a jitter-free MIDI input into your DAW

Post

Jitter is an issue of accurate timing, and in most cases this is simply not needed, as for both live and sequenced inputs quantization can be used in real-time.

Especially for live inputs there is no need to have high timing accuracy because humans are simply incapable of perceiving such timings in any way other than as audible frequencies. Humans can neither accurately produce high precision timings nor perceive them.

There are other methods available for the transmission of highly accurate sequence data: MIDI is a live I/O human interface intended to connect piano keyboards and similar instrument interfaces to sound generators. See OSC or similar interfaces which include high precision event time-stamps.

Regarding the idea of delays: many, many (countless in fact) synthesizers already incorporate this, it's called a "comb filter". Given a stereo signal path with a pair of comb (or other related) filters combined with modular modulation routing ("mod matrix" misnomer) this is trivial.
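The comb-filter point can be sketched minimally: mixing a delayed copy with the dry signal is a feedforward comb, and modulating the delay time gives the flanger/chorus family that the delay ideas above reduce to (parameter names here are illustrative):

```python
import math

def modulated_comb(x, sr, base_ms=5.0, depth_ms=3.0, lfo_hz=0.5, mix=0.5):
    """Feedforward comb with a modulated delay time: dry plus a copy
    delayed by an LFO-swept number of milliseconds."""
    out = []
    for n, s in enumerate(x):
        d_ms = base_ms + depth_ms * math.sin(2 * math.pi * lfo_hz * n / sr)
        i = n - int(sr * d_ms / 1000.0)
        delayed = x[i] if i >= 0 else 0.0
        out.append((1.0 - mix) * s + mix * delayed)
    return out
```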
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

aciddose wrote:Jitter is an issue of accurate timing, and in most cases this is simply not needed as for both live and sequenced inputs quantization can be used in real-time.

Especially for live inputs there is no need to have high timing accuracy because humans are simply incapable of perceiving such timings in any way other than as audible frequencies. Humans can neither accurately produce high precision timings nor perceive them.
i can certainly perceive the jitter when i drive a plugin with the midi notes coming from an external sequencer
There are other methods available for the transmission of highly accurate sequence data: MIDI is a live I/O human interface intended to connect piano keyboards and similar instrument interfaces to sound generators. See OSC or similar interfaces which include high precision event time-stamps.
that's all fine, but the list of devices with this "OSC" is small, and the list of classic MIDI devices is much bigger

in any case, this is not so much of an "idea" even, just a personal hope/dream

Post

Well it's unfortunately impossible and completely contradictory to the purpose MIDI is designed to serve.

The real issue is that you're using MIDI for something for which it was not designed while there are many better alternatives. For example, copy the sequence data to the system and run it there using per-sample timing inside your DAW rather than trying to transmit an external sequencer's sequence data in real-time.

It doesn't make sense to attempt to get accurate results with poorly time-stamped MIDI interfaces (including your hardware, interfaces (possibly cables and configuration), software and drivers) when MIDI doesn't have nearly enough bandwidth to get better than 1ms timing accuracy for single notes!

Post

antto wrote:i probably wouldn't put those in a synth today, because i can think of more effective and interesting ways to waste CPU cycles, but in the future, it may be no problem for a polysynth to be fully stereo and afford to have per voice effects in multiple places
It is not the future; we have been doing this for nearly four years now (since DUNE 2) :) It can actually save CPU cycles, since you can process using SSE, and sounds no longer require multiple voices to achieve a stereo effect. Off the top of my head I do not know the first synth which had a stereo path for voices, but I would guess that happened at least 10 years ago, if not 20.

Richard
Synapse Audio Software - www.synapse-audio.com

Post

Technically there is no real difference, other than a bit of redundancy, between a "stereo" voice and pairs of panned mono voices, especially where everything beyond the very first stage, including the oscillators and all modulation, is split in two anyway.

This can be a problem in synthesizers that would need to "sync" the voices due to arbitrary modulation timings at reduced modulation rates but for per-sample accurate modulation it doesn't make a difference.

So really the very first poly synthesizers could do such stuff. The DX-7 can (really! you can get each of the 16 voices output independently!), ancient hammond organs had multiple buses internally and so on.

Post

aciddose wrote:Well it's unfortunately impossible and completely contradictory to the purpose MIDI is designed to serve.

The real issue is that you're using MIDI for something for which it was not designed while there are many better alternatives. For example, copy the sequence data to the system and run it there using per-sample timing inside your DAW rather than trying to transmit an external sequencer's sequence data in real-time.

It doesn't make sense to attempt to get accurate results with poorly time-stamped MIDI interfaces (including your hardware, interfaces (possibly cables and configuration), software and drivers) when MIDI doesn't have nearly enough bandwidth to get better than 1ms timing accuracy for single notes!
i'm coming from another point of view:
i have a hardware sequencer that generates midi notes
if i feed this midi into a hardware synth - it will work.. as long as the synth response isn't uber lousy
but i have a synth plugin in the computer.. and no matter how "good" i implement its MIDI response - it doesn't matter because the midi turns into a jittery mess at the DAW level, if not earlier
so i don't want OSC or "better" things, all i want is to be able to have comparable latency/jitter as if my plugin was a hardware synth

as for the other issue about stereo voice structure - i'm sure it's common now, i just won't personally do it because i tend to live in the past, running a core2duo with 2GB ram

Post

The problem is your DAW doesn't include accurate hardware/software. There is nothing wrong with MIDI, with VST or with DAWs in general. The problem is simply that yours is crap enough not to accurately time-stamp MIDI events.

This is like asking to have an ice rink redesigned with padding because you keep bumping into the boards and hurting yourself. You need to learn to skate.

In other words there is nothing we could add to MIDI or any other specification to solve your problem. It would remain exactly the way it is because your system is incapable of delivering properly time-stamped events to the plug-ins.

My commentary was that adding precise time-stamping functionality to MIDI itself (or any other specification like VST) is completely pointless, because we already have other interfaces designed for high precision timing, which are *surprise!* barely ever used because they're completely unneeded in the vast majority of cases.

Post

Thanks for taking the time to reply, antto. I noticed that you focussed on the aspects you felt were weak or equivalent to existing technologies. To draw an analogy, academics often try to invalidate one another's research as part of a healthy research process, so I guess I should have at least partly expected that kind of response. I have a few replies to make, however.
so, "WM" is basically Yamaha's phase modulation, but with wavetables
I wasn't clear as to which definition of 'wavetable' I was using. I don't mean a pre-defined, arbitrary, single oscillator wave. I mean it in the way Massive uses it: A longer non-periodic 'function' or wave shape, with the portion played by the oscillator scrollable through the longer table, changing the wave shape.

So to answer your question: Not exactly, no. It doesn't change the phase of a single oscillator shape, it changes the wave shape itself. Since the wave is scrolling through the table, that would appear to have an effect on the phase position, but actually that could be easily compensated for.
and "SM" is modulatable blending between different wavetables, like Roland's "vector synthesis" thing
Checking Wikipedia: "Mixing is frequently done using a joystick, although the point can be controlled using envelope generators or LFOs." I'm talking about using an audible-frequency signal as a modulator, analogous to FM synthesis (which is what I meant by XM in general, if that wasn't clear either). So functionally, it is actually an alternate use, or a kind of 'misuse, hack or abuse' of a related technology, and thus actually a new technology (unless either of us can pinpoint an existing instrument that uses an audio signal as a modulator).
i can't believe there's not a single synthesizer with a stereo voice structure that has modulatable pan
or maybe i can, would modulating that be useful?
Again, modulatable by what? An LFO or an audio signal? As for its use, are AM or stereo effects useful? If so, would they be useful in combination? That the answer may not be obvious to an observer perhaps warrants a research question.
the haas effects specifically require delay lines per voice, per channel
How is it not possible, actually, to apply the effect to the sum of the voices from the oscillator? Also, safe but not entirely required assumptions are a) unison and b) polyphony.

I also noticed you didn't have a comment to make about what I referred to as 'romtable synthesis.' Since you were focussing on the weaknesses and limitations of the ideas, am I safe to conclude that you may consider this point the strongest and most promising idea? Then if I am going to direct my efforts somewhere, this may be the direction to look.

Since yesterday, I've had a few more insane thoughts yet to be pulled apart by the community.

X-mod: More on this idea (which I had previously called XM). The general principle is to assign any modulator signal to any parameter of any carrier signal (including itself, as per feedback). Either signal may be an envelope or LFO, as is common, but actually any audible signal can be slowed down to make an "LFO" control signal out of it; a short, looping sample, for example. The modulator signal could be an audible signal, as is used in FM, AM, etc., but again assigned to absolutely any parameter of any signal, some of which I've described, but also assignable to filter cutoff, FX wet/dry, or whatever your heart desires (all of us would love to be limited only by our imagination or creativity and to make many wild discoveries). Another interesting idea might be not only using sub-audible or even audible signals, but actually making use of super-audible signals (which may require oversampling) as either modulator and/or carrier signals.
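The routing part of the X-mod principle is essentially a modulation matrix, which can be sketched as a list of (modulator, parameter, depth) routes (the names and the additive combination rule are my assumptions, not the poster's spec):

```python
def apply_mod_routes(params, routes, mod_values):
    """'X-mod' routing sketch: any modulator can be routed to any
    parameter with its own depth; contributions sum onto the static
    parameter values for the current sample/block."""
    out = dict(params)
    for mod_name, param_name, depth in routes:
        out[param_name] += depth * mod_values[mod_name]
    return out
```

Called once per sample with audio-rate modulator values, this covers the FM/AM-style cases; called per block with LFO/envelope values, it is the familiar mod-matrix case.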

Predelays: The Haas effect is just one use of a short predelay. There are other ways in which predelays (or conversely, preanticipations) could be used. I mentioned FX wet/dry, but also oscillators or signals relative to each other, or mid/side and/or multiband predelays/preanticipations. You can try this yourself with some frequency splitting on a rich, modulating bass, for example. It can give things just a little more character quickly.
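The multiband-predelay idea can be sketched with a deliberately crude one-pole crossover, just to show the routing (the high band is input minus low band, and only the low band gets pushed back in time):

```python
import math

def twoband_predelay(x, sr, cutoff_hz, low_ms):
    """Multiband-predelay sketch: split into two bands, predelay only
    the low band, then sum. A real crossover would use matched filters;
    this one-pole split is just illustrative."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    low, lp = [], 0.0
    for s in x:
        lp = (1.0 - a) * s + a * lp  # one-pole low-pass
        low.append(lp)
    high = [s - l for s, l in zip(x, low)]
    pad = int(sr * low_ms / 1000.0)
    low = [0.0] * pad + low          # push the low band back in time
    high = high + [0.0] * pad        # pad the high band to equal length
    return [l + h for l, h in zip(low, high)]
```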

Post

Ninja_Edit wrote:
so, "WM" is basically Yamaha's phase modulation, but with wavetables
I wasn't clear as to which definition of 'wavetable' I was using. I don't mean a pre-defined, arbitrary, single oscillator wave. I mean ...
Xhip already does this. It's called through-zero differentiated frequency modulation which is the same thing as "phase modulation", just applied to the playback frequency of samples.

For example:
https://soundcloud.com/xhip/horriblefm2
Ninja_Edit wrote:
and "SM" is modulatable blending between different wavetables, like Roland's "vector synthesis" thing
Checking Wikipedia: "Mixing is frequently done using a joystick, although the point can be controlled using envelope generators or LFOs." I'm talking about using an audible-frequency signal as a modulator, analogous to FM synthesis (which is what I meant by XM in general, if that wasn't clear either). So functionally, ...
What you're looking for is a modular synthesizer or one with modular modulation routing (misnomer: "mod matrix").
Ninja_Edit wrote:
i can't believe there's not a single synthesizer with a stereo voice structure that has modulatable pan
or maybe i can, would modulating that be useful?
Again, modulatable by what? An LFO or an audio signal? As for its use, are AM or stereo effects useful? If so, would they be useful in combination? That the answer may not be obvious to an observer perhaps warrants a research question.

...
Again, a modular synthesizer. There is a damn good reason these things aren't supported in mainstream synthesizers: they just aren't useful.

Regarding xmod (see above with Xhip), regarding "predelays" and "haas" see my comments about how this is just a modular synthesizer with comb filters. Absolutely trivial. Regarding "haas" specifically: this is something you'll often see where people use different terms for very commonplace things like for example NOXIOUS hydrogen dioxide vapor! erm... I mean... water... steam.

Yes, hydrogen dioxide sounds interesting but water isn't.

This is a common method used by scammers and confidence/con artists to create an illusion of interest or unfathomable complexity where none actually exists. This is then used to form a fallacious argument from authority: who other than the wizard/trickster spinning the web of nonsense could understand that nonsense or further the web?

https://en.wikipedia.org/wiki/List_of_confidence_tricks

Example: https://en.wikipedia.org/wiki/Theranos
Actual idea: portable blood sampling and testing using minimal amounts of blood.

Sure, that sounds simple, I have a great idea too. So here it is: we create this "teleportation field" where you can transmit candied coconuts back in time by a fraction of a second, but where they come out (the same place) there are twice as many. So you basically toss in one coconut cake and instantly end up at the center of a black-hole that includes all matter in the entirety of the universe at infinite pressure.

How do we generate this field? Well that is all very, very simple my good man! I have top people working on perfecting our prototype device at this very moment. TOP PEOPLE.

So you see how that works...


Post

hydrogen dioxide
Actually, water would be dihydrogen oxide, but I see your point. That's basically how academia works, actually: Rename something, then charge money for students to sit in a class in hopes of understanding the neologistic jargon. For example, 'narrative literature' really just means stories, 'media' are communication technologies (and "the medium is the message" was never scientifically validated to begin with, nor is 'media studies' actually communications engineering), etc.

But actually, giving a concept a name gives you a place to store it in your memory bank since language is such a large part of the brain. 'X-mod' is a little simpler than 'modulating any signal's parameter from any signal,' surely. Less is more, we can agree.

I should have a further look at the mod matrices that exist and see whether any are capable of all of the many possibilities (the answer is possibly yes, I admit). Obviously a purely modular environment like Reaktor could do any of this, or anything else really.

I'd also note that you had no negative comments to make about romtables or a multiband/MS predelay effect; thus I'd conversely conclude (or perhaps confirm) that it is your opinion that these are worthwhile ideas?

Post

My comments there were that they're useless ideas and don't exist in common synthesizers for that reason: they aren't used to produce any popular timbres.

You can set up modular synthesizers to do such things, but the problem is that the resulting sounds aren't notable. So there is no practical musical use (or otherwise) that would make such techniques worthwhile.

That isn't to say that it would be a waste of time to build an instrument specifically designed to utilize such techniques, just that I can't see it being a "breakthrough" instrument rather than an academic exercise in esoteric synthesis techniques and methods.

re: water, yes I used the wrong term, the "proper" name is hydrogen hydroxide rather than dihydrogen oxide, although technically these are all nonsense terms that are unrecognized in chemistry because they aren't formed according to the usual rules for names.

Actually then hydrogen dioxide is slightly more interesting but quickly breaks down into water + oxygen at usual (>200k?) temperatures anyway, so isn't much different.

In any case stereo modulated delays are very common and another name where they are modulated by reciprocal is "comb filter".

Post

this is a fantastic little collection for the aspiring sociopath. danke. first hustle i make i'll have to remember to send a blank check your way in gratitude
