Simple questions on synth tech basics...

DSP, Plugin and Host development discussion.

Post

If that's what those curves refer to, it would be useful I suppose. The more notes there are between two keys, the slower portamento should be.

Post

That's how linear works.

In my implementation I align the time between two octaves so it is the same for both log and linear modes.

I also provide log/linear in Hz, while normally only V/oct (expo) is provided. In this mode the time is calculated in a slightly more complex way.

The result though is that in linear mode the time depends upon how far the notes are apart. If you set 250ms, C4 to C5 will take exactly 250ms, but C4 to C6 will take 500ms and C4 to D4 will take 41+2/3ms.
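That linear-mode timing can be sketched in a few lines; the function name and MIDI-note convention here are illustrative, not Xhip's actual code:

```python
# A sketch of linear-mode glide timing: glide time is proportional to the
# interval, so a fixed "per octave" time setting scales with distance.

def linear_glide_time_ms(time_per_octave_ms, start_note, end_note):
    """Glide time proportional to the interval (12 semitones = 1 octave)."""
    octaves = abs(end_note - start_note) / 12.0
    return time_per_octave_ms * octaves

# With 250ms per octave (MIDI: C4=60, D4=62, C5=72, C6=84):
#   C4 -> C5 takes 250ms, C4 -> C6 takes 500ms, C4 -> D4 takes 250/6 ms
```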

In log mode, moving between any two notes will always take 250ms to get within 1/100th of the distance to the destination value. More often you'll see 90% or 1/10th used, but I prefer 1/100th. The result is that the glide times in Xhip's log mode are faster than you might get from the same "250ms" in a 90%-based measurement.

The reason for this is that given the log/linear alignment to 1 octave timing, 1/10th doesn't seem to perceptibly match the time for linear while 1/100th matches far more closely in my opinion.

On this topic actually I can link a clip of my analog synthesizer which contains the same circuit implemented electronically rather than as software :)

https://soundcloud.com/aciddose/linear- ... ato-and-fm
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

Sounds funny, almost like vocoder stuff :)

I have not tried your synth recently (I did about two years ago, but I don't remember any details), but I suppose I would not like the log mode either. I was asking because having (almost) the same absolute time between any two notes leads to bumps, barely audible pitch changes and similar issues, depending on the set time and the note interval.

Post

I definitely like linear a lot more than log in the majority of cases, however most synthesizers implement only log.

Log = a low-pass filter.
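As a sketch of that idea, here is log-mode glide as a one-pole low-pass filter on the pitch value, with the coefficient chosen so the set glide time reaches within 1/100th of the destination (the convention described earlier in the thread; function names are illustrative):

```python
import math

# "Log mode" glide as a one-pole low-pass on pitch. The coefficient is
# chosen so the output covers all but 1/100th of the remaining distance
# in glide_time_s seconds; many implementations use 90% (1/10th) instead.

def onepole_coeff(glide_time_s, sample_rate):
    # Each step leaves (1 - a) of the error; after n = glide_time_s * fs
    # steps we want (1 - a)^n = 0.01, so solve for a.
    n = glide_time_s * sample_rate
    return 1.0 - 0.01 ** (1.0 / n)

def glide_step(pitch, target, a):
    return pitch + a * (target - pitch)
```

With a 250ms glide at 48kHz, a one-octave jump leaves exactly 1/100th of the interval (0.12 semitones) after 250ms, and the same holds for any interval, which is why the time is the same between any two notes.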

Post

This might sound like a strange question, but why don't plugins have a mono (as in non-stereo) switch? For bass sounds mono is often better, but I have to run the plugin through a stereo-to-mono effect. Even if there are pan knobs, setting them to 12 o'clock doesn't seem to result in mono sound. Or am I missing something, which wouldn't be the first time? :hihi:

Post

I have no idea. Xhip, for example, is entirely mono; the panning control is purely a pan law applied to adjust the levels of the left/right channels.

So in other words:

left = pan_left(synthesizer)
right = pan_right(synthesizer)

The synthesizer signal itself is entirely mono at this point however.
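That pan-law split might look something like this; the equal-power law here is just one common choice, not necessarily the one Xhip actually uses:

```python
import math

# A sketch of a pan law applied to a mono synth signal. Equal-power
# (-3dB centre) panning keeps perceived loudness roughly constant as
# the pan position moves; the law Xhip uses is not specified here.

def pan_gains(pan):
    """pan in [-1, +1] -> (left_gain, right_gain), equal-power law."""
    angle = (pan + 1.0) * math.pi / 4.0   # maps pan to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

mono = 0.5                  # one sample of the (mono) synthesizer output
gl, gr = pan_gains(0.0)     # centred: both gains are sqrt(1/2)
left, right = gl * mono, gr * mono
```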

Most synthesizers are mono: the Junos, for example, and every classic monophonic synthesizer I'm aware of (how could a single voice be stereo?). When they do have stereo outputs, as on the Juno, those are the outputs of the stereo chorus effect, which takes a mono input. If you disable the chorus on the Juno synthesizers, the output should be identical on both channels.

You'll often find that voices are actually pairs of voices in some software synthesizers. So there are twice as many oscillators, filters, amplifiers and so on and the pairs are panned L/R for example. Xhip does this with unison voices = 2 and "stereo" enabled in the alpha version, although the voices are entirely distinct. In some software synthesizers the voices will be linked with the same parameters and modulation while only the phases are different between them.

(In that sort of stereo-voice synthesizer, it is impossible to get mono output once the voices are mixed, unless they are hard-panned L/R, in which case you can simply take one of the outputs and drop the other.)

Now for most plugins? Most plugins are absolutely insane. Back when I first wrote Xhip the reason I wrote it was because I couldn't stand the insanity of the way plugins were set up. I wanted to set an example of the "proper" way to implement basic features of a synthesizer. For example the Juno synthesizers are a great match to Xhip in terms of most features, modulation routing and functionality.

Unfortunately even today most synthesizer plugins are implemented by people without much experience with these instruments and with little understanding of the design decisions made. The decisions made today are quite different as the influences today are different than they have been in the past. This is one potential explanation for why we end up with the strange implementations we see in some plugins.

Post

Isn't a simple level gradient already enough to create a basic stereo output?
I mean, with synths that have unison and a spread knob, doesn't that mean that the, say, 8 voices are identical, but simply panned to fixed or even changing positions in the panorama? Or does the synth also add slight delays, which I take it is another way to locate a sound source in space?

I was only wondering because I uninstalled Element when I sold it and noticed again that it said "Element Stereo" in the plugin folder. But it sounded as little or much stereo as all the other synths I have, frankly.

Post

No, identical voices would mix to an identical mono result. The panning would only change the level of the left/right sides.

The phases must be different including all oscillator phases and filters, noise, etc.

For example if in Xhip you use the 2-voice unison and stereo (I'd like to change this to a width knob) but at the same time you sync all the oscillators, filter, LFOs and envelopes, the result will be identical to as if you'd used a single mono voice at twice the level.
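A toy demonstration of that point, assuming two hard-panned unison voices (illustrative code, not Xhip's):

```python
import math

# Two hard-panned voices with synced phases sum to an effectively mono
# result (left == right), while a phase offset between them produces
# genuinely different channels.

def voice(phase, n, freq=0.01):
    """One sine 'voice'; freq is in cycles per sample."""
    return [math.sin(2.0 * math.pi * freq * i + phase) for i in range(n)]

n = 64
left, right = voice(0.0, n), voice(0.0, n)      # phases synced
assert all(abs(l - r) < 1e-12 for l, r in zip(left, right))  # mono

right = voice(0.5, n)                           # phase offset on the right
assert any(abs(l - r) > 0.1 for l, r in zip(left, right))    # stereo
```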

Using delays or other methods of "mono to stereo" would be considered a "stereo expander" effect. Generally these work by converting to mid/side, delaying and filtering the side channel and scaling the balance between mid / mid+side / side for 0% to 100% to 200% "width".

That's only a basic description of the effect though, there are a whole load of different implementations of these out there.
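A bare-bones sketch of that mid/side width scaling, leaving out the delay and filtering on the side channel that real expanders add:

```python
# Mid/side width scaling: width = 0 collapses to mono, width = 1 is an
# identity transform, width = 2 doubles the side (difference) signal.

def widen(left, right, width):
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side
```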

Yes, it is possible a stereo version of a plugin would simply apply this effect to a mono output, but that would be stupid as you could also just apply the same effect after the synth plugin in a far more flexible way. So I doubt any plugin would do this.

Post

Also related to stereo, I sometimes wonder why synth plugins sound so weak on the headphones. I noticed that high levels of stereo spread seem to destroy sounds, especially on the headphones. I have a pad that sounds nice on the speakers, but on the headphones it sounds ugly, not like one pad anymore, but individual simultaneous voices. The glue is gone so to speak, which also destroys most room effects.

Generally speaking, I wonder why synths sound so different from samples. What sounds like a punchy synth bass on the speakers turns into a fuzzy nasal something on the headphones, the bottom and dynamics seem gone. When I listen to a bass on the M1, however, it sounds so good and punchy even on the headphones.
What sounds good on the headphones also sounds good on the speakers, but not the other way round.

I guess I will kick all soft synths, they are a waste of time :P Maybe it is different with hardware, dunno. But that's not an option at the moment...

Post

The thing is, with speakers you are never listening to the raw synth sound as it was produced by the synth. You are listening to the sound field as it exists in your listening room, which includes some natural channel cross-talk and room reverb. Even acoustic instruments can sound quite horrible if you have a perfectly dry sample (e.g. recorded in an anechoic chamber).

So on headphones, you probably always want to add a tiny bit of stereo reverb to everything (not necessarily at obviously audible levels; it can be very low level and still work wonders) to give the ears some "reference space", and you might also want to avoid completely hard left/right panning (i.e. narrow the stereo field just a notch if a synth likes to pan stuff hard). Unfortunately not all algo verbs work very well for this; in my experience some of them can even make the situation worse (i.e. some algo verbs sound great on speakers, but not so great on headphones; fortunately there are plenty that will do the trick, or you could always use convolution).

Post

A common technique is actually to use a headphone stereo mixer (a "Meier crossfeed"), which basically mixes the left/right signals by frequency, so certain frequency ranges are more blended and certain others more isolated.

http://www.meier-audio.homepage.t-onlin ... ssfeed.htm
(Note this technique and similar ones existed long before, so the name "Meier crossfeed" isn't really an accurate attribution.)

A large number of headphone amplifiers have such a circuit built in. Without it, ordinary stereo content at ordinary levels will sound radically different on headphones than in a room on monitors/speakers. It sounds like your headphone amp may not include one.

The filter is supposed to approximate the blending effect you get in a normal-sized room.

Post

Aciddose, do you know whether some of the "headphone" destination settings in computer audio driver software intentionally apply similar matrixing? And if so, I wonder how successful it may be.

I have rarely listened on headphones in recent years. About the only time is occasionally listening on Etymotic ER-4s to try to better detect possible low-level audio defects when programming audio, and I haven't done much audio programming lately.

I got in the habit of disabling all psycho-acoustic processing in computer audio driver software, because it seems to always make the audio worse to my ear, but maybe for routine headphone listeners, selecting "headphones" in the driver destination menu would actually be of benefit?

Post

The headphone settings almost certainly apply a similar filter/blend effect to the output.

Unfortunately there are so many variations I'd never be able to tell you exactly what they do, and it is difficult to measure even given the devices themselves. Sometimes you're lucky enough that they'll publish what they do in the manual and sometimes not.

Yes, while you're listening to anything other than technical material the filter will almost certainly have a beneficial effect. As he describes on the webpage I linked, the localization of sounds positioned too far left/right will be improved a ton even though the stereo image is actually reduced in width. This is because our brain doesn't just measure the difference in amplitude; it also measures the frequency-dependent delay time.

So, having 100% width is detrimental to stereo image on headphones and I'd also say in a mix in general. The sound will not be localized "left" or "right", it will be localized directly to the speaker it comes from.

Using some VST plugins which implement this or similar filters is a great idea if you're having any trouble with wide stereo mixes.

Post

Headphones will sound approximately 6dB quieter in the bass, because with speakers both ears receive each speaker's bass output.
A stereo crossfeed of bass frequencies will make up for this.
There is a very simple way to achieve a (simple) crossfeed like this in your DAW: route your signal to another channel in parallel with the original, swap the left and right channels, and apply a low-pass filter somewhere around 500Hz. This is very simplified, but it covers the major portion of the difference between headphones and speakers.
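That parallel-channel recipe might look like this in code; the one-pole coefficient formula and the 50% mix amount are illustrative choices, not part of the recipe above:

```python
import math

# A sketch of the parallel crossfeed: swap the channels, low-pass the
# swapped signal (one-pole, ~500Hz), and mix it back with the dry signal.

def make_crossfeed(sample_rate, cutoff_hz=500.0, amount=0.5):
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    state = [0.0, 0.0]  # low-pass filter state for each crossfed path

    def process(left, right):
        state[0] += a * (right - state[0])  # right feeds the left ear
        state[1] += a * (left - state[1])   # left feeds the right ear
        return left + amount * state[0], right + amount * state[1]

    return process
```

In a DAW you would do the same thing with routing and a filter plugin, as described above; this per-sample form is just the equivalent signal flow.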

Post

The LP filter you mention is interesting as I noticed that the lower the cutoff frequency of the synth bass patch, the more solid the bass sounds. The best results I obtained were for a saw/pulse with cutoff so low it almost sounds like a sine. Maybe the volume ratio between low and high frequency content is screwed on headphones because the power is simply missing with those tiny membranes...
On speakers the higher frequencies don't hurt the low frequencies, they just add character.
