Pan law contradiction(?)

DSP, Plugin and Host development discussion.

Post

This seems like the most intelligent section of the forum, so I’d prefer to ask even tangentially related questions here, unless it causes an uproar.

The following passage is on one of Ableton’s online fact sheets:

“Live uses constant power panning with sinusoidal gain curves. Output is 0 dB at the center position and signals panned fully left or right will be increased by +3 dB. In order to minimize this volume change, it may be helpful to narrow the overall stereo width before doing extreme panning. This can be done via the Width control in the Utility device.”

I’m having difficulty reconciling the suggestion for minimizing volume changes with the fact that constant power pan laws are designed to maintain volume as the pan changes. Is there something I’m missing?
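
Just to make the wording concrete, here’s roughly what I understand that description to imply, sketched in Python (the exact curve is my own guess from the passage above; Ableton’s actual implementation may differ):

    import math

    def db(x):
        return 20 * math.log10(x)

    def pan_gains(pos):
        # Sketch of a sinusoidal constant-power pan law (not Ableton's actual code).
        # pos in [-1, +1]: -1 = hard left, 0 = center, +1 = hard right.
        # cos/sin keep the power constant (gl^2 + gr^2 is the same at every
        # position); the sqrt(2) factor lifts the curve so the center sits at
        # 0 dB, which puts the extremes at +3 dB as the passage describes.
        theta = (pos + 1.0) * math.pi / 4.0
        return math.sqrt(2) * math.cos(theta), math.sqrt(2) * math.sin(theta)

    for pos in (-1.0, 0.0, 1.0):
        gl, gr = pan_gains(pos)
        print(pos, [round(db(g), 2) if g > 1e-6 else "silent" for g in (gl, gr)])
    # -1.0 [3.01, 'silent']   hard left:  left channel at +3 dB
    #  0.0 [0.0, 0.0]         center:     both channels at 0 dB
    #  1.0 ['silent', 3.01]   hard right: right channel at +3 dB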

Post

The "constant-volume" property of constant-power panning only really works under assumptions. First of all it assumes a mono signal (that's why it's a pan law, not a balance law). And also it really only works for content over ~700Hz, for lower stuff you'll probably want something else. There's some dependence on whether you're listening on headphones or speakers as well, if my intuition serves me well. These pan laws are basically a compromise that kinda works and does no harm otherwise.

In the stereo case, it basically gains the left and right channels individually, and how that affects the volume depends even more on the signal, mainly on how correlated left and right are. There are also some mono-compatibility pitfalls here. It gets complicated quickly.

By reducing the stereo width before panning/balancing, the signal gets closer to a mono source, which also brings it closer to the ideal assumption. The suggestion is reasonable, because if one is doing extreme panning, one is obviously not interested in keeping the stereo width of the source. A hard-panned signal is always narrow.
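
For illustration, narrowing the width before the panner usually amounts to plain mid/side scaling, roughly like this (a sketch of the general technique, not of what the Utility device actually does internally):

    import numpy as np

    def narrow_width(left, right, width):
        # Plain mid/side scaling: width = 1.0 leaves the signal untouched,
        # width = 0.0 collapses it to mono. This is an assumption about how a
        # generic "Width" control works, not Ableton's implementation.
        mid = 0.5 * (left + right)
        side = 0.5 * (left - right) * width
        return mid + side, mid - side

    # Example: fully uncorrelated dummy "stereo" material, narrowed to 50 %.
    rng = np.random.default_rng(0)
    left, right = rng.standard_normal(48000), rng.standard_normal(48000)
    narrow_l, narrow_r = narrow_width(left, right, 0.5)

Once the width is reduced, whatever goes into the panner behaves much more like the mono source that the constant-power assumption wants.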

Post

hugoderwolf wrote: Tue Nov 22, 2022 4:12 pm The "constant-volume" property of constant-power panning only really holds under certain assumptions. [...]
Hello, thanks so much for your detailed reply! That makes sense to me; I know that perceived loudness is a function of frequency, so I suppose that’s likely why low frequencies aren’t affected as much? The correlation for stereo signals makes sense too: if there’s -1 correlation or anti-phase issues, obviously there would be frequency dropouts or (in simple waveform cases) outright cancellation.

Post

-1 correlation will tend towards 0 with panning because one channel will be reduced. The side signal could then make it into mono further down the chain.

Post

losangeles wrote: Tue Nov 22, 2022 4:31 pm I know that perceived loudness is a function of frequency, so I suppose that’s likely why low frequencies aren’t affected as much?
It's not so much because of loudness perception itself, but about how the ears work in an actual sound field. Low-frequency content from a left or right speaker arrives at both ears in a correlated way, because the wavelength is large with respect to the ear distance. And a mono signal on both speakers adds up coherently to +6 dB. That would suggest a +6 dB pan law for constant loudness (constant amplitude rather than constant power).

Above a transition region of roughly 700-1400 Hz, the wavelengths are small enough that the signals from a single left or right speaker become uncorrelated at the two ears. That's why constant-power panning works better there.

You can try it out by panning a really broadband source (like a full mix in mono) hard left or right and listening to how the spectral balance shifts. In the hard left/right positions, the low end basically lacks 3 dB in level compared to the mids and highs. You can compensate for that with a low shelf.

Strictly speaking, pan knobs could be better at what they do. But on the other hand, most engineers probably don't want a hidden EQ in their pan knob.
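
If you want to put a rough number on that low-end difference, here's a quick sketch. It compares the coherent amplitude sum (roughly what both speakers give you in the low end) with the power sum (the better model above the transition region) under a plain sin/cos constant-power law:

    import math

    def db(x):
        return 20 * math.log10(x)

    for name, pos in (("center  ", 0.0), ("hard pan", 1.0)):
        theta = (pos + 1.0) * math.pi / 4.0
        gl, gr = math.cos(theta), math.sin(theta)   # plain constant-power gains
        coherent = gl + gr            # low end: the two speakers add in amplitude
        uncorr = math.hypot(gl, gr)   # above the transition: they add in power
        print(name, round(db(coherent), 2), round(db(uncorr), 2))
    # center   3.01 0.0
    # hard pan 0.0 0.0
    # Relative to the mids/highs, the low end at hard pan sits about 3 dB lower
    # than it does at the center position.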

Post

camsr wrote: Tue Nov 22, 2022 4:43 pm -1 correlation will tend towards 0 with panning because one channel will be reduced. The side signal could then make it into mono further down the chain.

I think I may have confused myself above. If the signal is true stereo but has a -1 correlation, why would panning tend it toward zero? I think above I was mistakenly assuming it got summed to mono, in which case full cancellation would occur, but that’s not true for panning; only the relative levels change. By simply panning a true stereo signal hard L or R, wouldn’t just the respective channel’s original signal pass through (at +3dB, or whatever pan law is employed)?

Post

hugoderwolf wrote: Tue Nov 22, 2022 4:51 pm It's not so much because of loudness perception itself, but about how the ears work in an actual sound field. [...]
Okay, I see now. It’s the ratio of the wavelength to the extra distance the sound travels to reach the far ear that’s responsible (low-frequency wavelengths are much longer than that scale, so they’re largely inconsequential for noticeable phase issues).

The only remaining question I have is the line where you write “That would suggest a +6 dB pan law for constant loudness (constant amplitude rather than constant power).” I see where the +6 dB arises: 10*log10((2A)^2/A^2) = 20*log10(2) ≈ 6 dB, but I wasn’t aware that constant amplitude results in constant loudness (I thought that was what constant power was closest to).

Post

losangeles wrote: Tue Nov 22, 2022 4:55 pm By simply panning a true stereo signal hard L or R, wouldn’t just the respective channel’s original signal pass through (at +3dB, or whatever pan law is employed)?
Yes, however if it's a mix pan (here meaning not a hard pan: some signal still remains in both L and R), stereo correlation remains a variable and what I said above holds true. I always approach mixing stereo as mixing 2 discrete channels, whether the audio in the 2 channels is correlated (true stereo) or not (different sounds hard panned).

Post

The +3 dB thing comes from mixing noise together... if you downmix 2 different (uncorrelated) noise signals into 1 channel, they tend to rise in power by +3 dB. IMO this is only useful with 2 sounds that are fairly similar but generally uncorrelated. I have always used panning that is either +6 dB at full tilt or 0 dB.
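
A quick numeric check of those figures (just summing long noise buffers and comparing RMS levels):

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.standard_normal(1_000_000)
    b = rng.standard_normal(1_000_000)        # second, uncorrelated noise signal

    rms = lambda x: np.sqrt(np.mean(x * x))
    db = lambda x: 20 * np.log10(x)

    print(round(db(rms(a + b) / rms(a)), 2))  # ~ +3 dB: powers add (uncorrelated)
    print(round(db(rms(a + a) / rms(a)), 2))  # +6.02 dB: amplitudes add (identical)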

Post

camsr wrote: Tue Nov 22, 2022 5:07 pm Yes, however if it's a mix pan (here meaning not a hard pan: some signal still remains in both L and R), stereo correlation remains a variable and what I said above holds true. [...]
Ok, I can see that (if the pan isn’t a hard pan in either direction and the two channels share some common signal). Perhaps I’m somewhat confused as to the nature of mono/stereo in its own right? I know a mono source is one, such as a single mic, that produces a single waveform. On my analog mixer each track has one channel, also. But in a DAW, all channels are stereo, correct? And does “mono” in this context refer to dual mono?

Post

losangeles wrote: Tue Nov 22, 2022 5:04 pm Okay, I see now. It’s the ratio of the wavelength to the extra distance the sound travels to reach the far ear that’s responsible (low-frequency wavelengths are much longer than that scale, so they’re largely inconsequential for noticeable phase issues).
Actually at lower frequencies (i.e. below about 1 kHz) the brain relies on the interaural phase difference to locate the sound source (intensity is not necessarily that reliable, as low-frequency waves can pass through and/or diffract around the head), whereas at high frequencies (i.e. above about 1 kHz) the phase relationship becomes more or less random and the interaural intensity difference is used instead.

For the same reason (the phase relationships at high frequencies are essentially random) the best estimate for constant loudness is to assume they might just as well be uncorrelated, which gives us the 3 dB constant-power pan law for the range of frequencies where the interaural intensity difference is the primary localization cue.

At very low frequencies (say below 100 Hz or so), where the wavelengths are so long that the phase differences are too small for the brain to estimate reliably, the localization of sound sources is typically very poor. This is why monophonic subwoofers are a thing.

Post

mystran wrote: Tue Nov 22, 2022 6:51 pm Actually at lower frequencies (i.e. below about 1 kHz) the brain relies on the interaural phase difference to locate the sound source [...]
When you say “interaural phase difference,” do you mean ‘beating?’

EDIT: sorry, disregard this, I didn’t know it was a well-defined term. Thank you!

Post

You are right, it is terribly written ;)

Post

First, thanks to all for your input; it truly helped so much. Here’s a brief report outlining what I (hopefully) picked up here:

Mono source:
Low freq: wavelengths are long relative to the inter-ear distance, so the interaural phase difference is negligible and the signals from both speakers add coherently, doubling in amplitude at the center. If constant loudness is desired across the pan in this regime, that suggests a 6 dB pan law (constant amplitude).
High freq: the wavelengths are small relative to the inter-ear distance, so the signals arriving at the ears from the two speakers can be treated as largely uncorrelated. We can no longer predict exactly how the amplitudes combine; however, if we set the channel gains across the pan to follow sin/cos, the sum of the squared amplitudes (the total power) stays constant regardless of the uncorrelated phases, which is the 3 dB constant-power law.

I’m thinking about the stereo cases now, but does that sound agreeable?

Post

losangeles wrote: Tue Nov 22, 2022 6:55 pm When you say “interaural phase difference,” do you mean ‘beating?’
Interaural phase difference is the technical term. It's really the difference in arrival time that the brain wants, but unless there's some clear transient event, we need to estimate this from the phase difference.

If we assume the ear-to-ear distance is about 20 cm and the sound propagation path goes around a spherical head, then the maximum path length difference is approximately 10 cm * (1 + pi/2), or about 25 cm, which gives a maximum time difference of about 0.7 ms. To fully disambiguate we require that the wavelength is at least twice this path difference, which gives a maximum frequency of about 700 Hz; but if we can correctly guess which signal arrived first (say, from the intensity difference), then one wavelength is enough and we get a maximum frequency of about 1400 Hz. Beyond that there might be multiple cycles and we can't really tell how many.
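
For anyone who wants to redo that arithmetic, here's a small sketch (assuming roughly 20 cm between the ears, a path around a spherical head, and a speed of sound of about 343 m/s):

    import math

    c = 343.0                                  # speed of sound, m/s (assumed)
    ear_distance = 0.20                        # assumed ear-to-ear distance, m
    path_diff = (ear_distance / 2) * (1 + math.pi / 2)  # around a spherical head
    itd_max = path_diff / c                    # maximum interaural time difference

    print(round(path_diff, 3))        # ~0.257 m extra path to the far ear
    print(round(itd_max * 1000, 2))   # ~0.75 ms maximum time difference
    print(round(1 / (2 * itd_max)))   # ~667 Hz: phase unambiguous within half a cycle
    print(round(1 / itd_max))         # ~1334 Hz: if we already know which ear leads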

When I was writing this post, I just Googled "ear to ear distance" and calculated the rest, but it's probably not a pure coincidence that my estimate arrived at roughly the same 700-1400Hz range that hugoderwolf gave earlier. :)
