
PostPosted: Sun Jan 13, 2008 12:47 pm
by runagate
Is there a way to use phase correlation detection to spectrally/multiband twiddle the phase of things and lock them closer to the same phase with a plug-in?

PostPosted: Mon Jan 14, 2008 6:57 pm
by MaxC
Bumping because I have wondered this as well.

PostPosted: Mon Jan 14, 2008 7:17 pm
by ouroboros
phasebug will allow you to lock left and right, but no detection and no spectral options.
there's always calculating the frequency and applying a sample delay to one side in the channel you've EQ'd apart...

sorry, I'm no help. (Actually I thought that the Roger Nichols suite might do this as well.)
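For what it's worth, the arithmetic behind the "calculate frequency, apply sample delay" trick mentioned above is only a couple of lines. This is a sketch of the conversion only (the function name and default rate are illustrative, not from any plug-in): a phase offset at one frequency corresponds to a fixed time delay, which you can round to samples.

```python
def phase_to_sample_delay(phase_deg, freq_hz, sample_rate=44100):
    """Convert a phase offset at one frequency into a delay in samples."""
    period_s = 1.0 / freq_hz                 # one full cycle in seconds
    delay_s = (phase_deg / 360.0) * period_s # fraction of a cycle
    return delay_s * sample_rate             # may be fractional

# e.g. a 90 degree lag at 100 Hz is a quarter of a 10 ms cycle:
# 2.5 ms, i.e. 110.25 samples at 44.1 kHz.
```

Of course this only fixes the phase at that one frequency, which is exactly the limitation being discussed.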

PostPosted: Tue Jan 15, 2008 6:49 am
by TheoM


along with the blue cat widening meters and multi phase scope?

PostPosted: Tue Jan 15, 2008 2:23 pm
by runagate
ttoz wrote:


along with the blue cat widening meters and multi phase scope?

Good call, ttoz, though my interest in it is, not surprisingly, how I can spectrally tear apart a mix; I just phrased it in such a way that it'd help others with an actual problem.

Hmm. I bet I can make a spectral multiband routing in Usine and just put phasebugs on the channels and assign the controls to something...

PostPosted: Wed Jan 16, 2008 7:11 am
by sophonomoth
runagate wrote:Is there a way to use phase correlation detection to spectrally/multiband twiddle the phase of things and lock them closer to the same phase with a plug-in?

Adobe Audition 3!


PostPosted: Wed Jan 16, 2008 8:00 am
by pianistUK
For a minimum-phase system (which covers ordinary analogue-style EQ), the phase response with respect to frequency roughly follows the slope of the log-amplitude response: this is the Bode gain-phase relation.
Thus, for a fairly simple audio transer function like a high-pass filter or a bell-shaped EQ boost or cut, you can sketch a simple graph of the EQ shape and then sketch the gradient (rate of change) of the amplitude curve with respect to frequency to draw the phase response graph of that filter.
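The slope-follows-phase idea is easy to check numerically. This is a minimal NumPy sketch (corner frequency and evaluation points are illustrative) using a one-pole analogue-style high-pass, H(jw) = jw / (jw + w0): well below the corner the magnitude rises at 20 dB per decade and the phase sits near +90 degrees, and well above the corner the magnitude is flat and the phase is near 0.

```python
import numpy as np

w0 = 2 * np.pi * 1000.0                 # 1 kHz corner (illustrative)
w = np.logspace(0, 5, 6) * 2 * np.pi    # 1 Hz .. 100 kHz, one point per decade
H = 1j * w / (1j * w + w0)              # one-pole high-pass response

mag_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
# At 1 Hz: magnitude climbing ~20 dB/decade, phase ~ +90 degrees.
# At 100 kHz: magnitude flat, phase ~ 0 degrees.
```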

If you want to compare a complex audio signal with a filtered version of that same audio signal, you can measure the phase response by using a nice software analyzer like SpectraPlus from Pioneer Hill Software.

You can download a free 30 day demo version, and it can monitor live audio input in realtime, or look at existing WAV files and graph the phase difference between them (ie. left channel versus right channel for a stereo file) as well as the frequency response and other things. It's really great!

You can manipulate the phase response of a WAV file using any audio editor that can do basic EQ processing and has the ability to reverse the file.
If you apply an EQ boost, then reverse the file, then apply the opposite EQ cut by the same amount, then reverse the file back the right way round again, the net result is an unchanged, flat amplitude response but a changed net phase response, i.e. you've applied an allpass filter.
If you apply the EQ, then reverse, then the same EQ again, then reverse, the net result is phase-linear EQ, i.e. no net phase shift (the forwards and backwards phase shifts cancel out) but double-strength EQ overall (so you'll only need approximately half the EQ boost or cut, since it's applied twice over).
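The second recipe (EQ, reverse, same EQ, reverse) is exactly what SciPy's `filtfilt` automates: it runs an IIR filter forwards then backwards, cancelling the phase shift and squaring the magnitude response. A minimal sketch, with an illustrative filter and test tone:

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 50 * t)      # 50 Hz test tone

b, a = butter(2, 200 / (sr / 2))    # 200 Hz low-pass (illustrative)

y_once = lfilter(b, a, x)           # ordinary pass: shifts the phase
y_zero = filtfilt(b, a, x)          # forward + backward: zero net phase

# y_zero stays lined up with x (only the squared magnitude applies);
# y_once is visibly phase-shifted against x.
```

The allpass variant (boost, reverse, matching cut, reverse) is the same trick with the second filter inverted: flat magnitude, but twice the filter's phase shift left behind.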

PostPosted: Wed Jan 16, 2008 3:28 pm
by runagate
Aside from not being able to google up a context-specific definition of "transer" as you use it, I thought that was a very informative post.

Gets my evil little mind to working, it does.

It seems odd to me that there's no DSP process by which you can "phase lock" different materials all being mixed together. Given my proliclivity for purposefully altering the phase of every available sound it'd be of no use to me, but I can certainly see an application for, say, mixing a multitrack recording of a drum kit.

PostPosted: Wed Jan 16, 2008 5:28 pm
by Funkybot's Evil Twin
runagate wrote:but I can certainly see an application for, say, mixing a multitrack recording of a drum kit.

With a matched pair of overheads, locking the phase will greatly improve the sound (I'll manually do it at the sample level). But a problem arises when you try and "phase lock" two different kinds of mics. When you try that, you'll discover that getting the low end in phase might knock the mids or highs out of phase. Due to the different frequency responses you can never get it right. Anyone who has ever thrown two different mics on a guitar cab and tried blending them together has experienced this to some extent. Your best bet in that situation is to accept that the phase is going to be off, and just make sure you don't hear any obvious comb filtering and everything is still mono compatible.
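The manual sample-level alignment described above can be automated with a cross-correlation search for the best lag. A rough sketch, with synthetic signals standing in for two mic takes (the function name and the 30-sample offset are illustrative):

```python
import numpy as np

def best_lag(a, b):
    """Return the delay of `a` relative to `b` in samples (negative = a leads)."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

sr = 44100
t = np.arange(sr // 10) / sr
close_mic = np.sin(2 * np.pi * 80 * t) * np.exp(-20 * t)  # decaying 80 Hz thump
far_mic = np.concatenate([np.zeros(30), close_mic])[: len(close_mic)]  # 30 samples late

# best_lag(close_mic, far_mic) -> -30: the close mic leads by 30 samples.
# np.roll(far_mic, best_lag(close_mic, far_mic)) roughly lines the takes up
# (mind the wraparound at the edges).
```

As the post says, this only fixes a plain time offset; it can't undo the frequency-dependent mismatch between two different mic types.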

PostPosted: Thu Jan 17, 2008 8:06 am
by pianistUK
Hi Runagate,

Sorry for the typo! That word should be "transfer" as in "transfer function" meaning whatever function is needed (ie. processes like EQ, filtering, phase shifting, compression, limiting, etc.) to transform the input signal into the output signal.

I see you had similar problems typing the word "proclivity" but never mind!

The forwards and backwards EQ'ing of a signal allows you to separate the amplitude gains from the phase shifts depending on what order you do the processing in. In a program like the excellent Adobe Audition mentioned above (which was originally called Syntrillium Cool Edit Pro) you can write a script to do these EQ / reverse / EQ / reverse sequences quickly and consistently.

I forgot to say this assumes you are using normal IIR (infinite impulse response) filters for the parametric EQ or whatever, which behave the same as traditional analogue EQ. To be honest, you could do this type of phase shifting with any recording device that can play a signal backwards, using outboard analogue equalizers; however, you'd have all kinds of other phase shifts (and transfer functions) in the re-recording process to tape or digital via the A/D and D/A converters anyway, so that method would be very approximate.

The beauty of digital editing is trying something out and saving as a new WAV file to directly compare it against the unaltered original.
Much software EQ these days offers FIR (finite impulse response) digital filtering, which can offer phase-linear EQ processing directly, so to speak. I have found FIR filters to be noisier due to rounding-off in the maths, but they are quite useful. Something like Adobe Audition's Phase Equalizer mentioned above works by using FIR filters, I think (I don't have this software myself).
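The FIR-gives-linear-phase point is easy to verify numerically: a symmetric FIR filter delays every frequency by the same number of samples. A small sketch with SciPy (tap count and cutoff are illustrative):

```python
import numpy as np
from scipy.signal import firwin, group_delay

ntaps = 101
taps = firwin(ntaps, 0.2)              # low-pass, cutoff at 0.2 x Nyquist

assert np.allclose(taps, taps[::-1])   # symmetric impulse response

w, gd = group_delay((taps, [1.0]))
# in the passband the group delay sits flat at (ntaps - 1) / 2 = 50 samples,
# i.e. every frequency is delayed equally: linear phase.
```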

A further point: don't confuse "phase" with "polarity". People often say "out of phase" to mean a signal is inverted to the opposite polarity, but an inverted signal is still in perfect phase with the original at all frequencies!

A phase correlation meter basically only tells you whether the two channels sum to exactly identical mono (+1), uniquely different stereo (0), or exactly opposite-polarity inverted mono (-1). That doesn't tell you much about their frequency content, except that if the overall sum reads somewhere between 0 and -1 the signals must be cancelling each other out at some frequencies if not all; you need a better measurement tool, such as SpectraPlus, to find out where.
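The meter's reading is just the normalised correlation of the two channels; a minimal sketch of the arithmetic (not any particular meter's implementation, which would also block-average over time):

```python
import numpy as np

def correlation(left, right):
    """Normalised correlation: +1 dual mono, 0 unrelated, -1 polarity flip."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(np.sum(left * right) / denom)

sr = 44100
t = np.arange(sr) / sr
mono = np.sin(2 * np.pi * 440 * t)

correlation(mono, mono)     # +1.0  (identical mono)
correlation(mono, -mono)    # -1.0  (opposite polarity)
```

A quadrature pair (sine against cosine at the same frequency) reads 0, despite being completely related in frequency content, which is exactly why the single number tells you so little.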

If you've got two different mics recording one source (like a vocal), maybe at different distances, you will get a slight time delay between the mics: the different arrival times of the sound pressure nodes and antinodes from the singer's voice, plus the room acoustics from all different directions. The microphone's own shape and pickup polar pattern will affect things too. It's very complex. A ribbon mic, for instance, has a figure-8 pattern, which means opposite polarity from the rear of the mic compared to an omnidirectional mic in the same position!

There is no easy way to match up (correct) the phase non-linearities of two different mic position recordings without taking into account the different EQ curves, distortion and direct-to-reverb ratio of each as well. The best way to avoid these problems is to use matched stereo pairs of mics in symmetrical layouts, and mics which have very good phase response, like Earthworks or DPA, or omni mics in general. Even with two theoretically perfect omni mics in an anechoic chamber, you won't get two phase-identical tracks recorded from different mic positions, because the acoustic source (singer / piano / guitar / drum kit / sax / trumpet / violin / loudspeaker cone, etc.) will radiate different frequency spectra in different physical directions, with natural phase shifts and subtle Doppler-type artifacts from movement (e.g. a singer's head) and from physical size and distance (e.g. a 9-foot grand piano) causing mixtures of arrival times. Your ears wouldn't be able to pinpoint location without these phase differences; they are part of nature and sound.

The type of correction you are looking for might be azimuth correction (in the old days used for aligning stereo tape recordings where they were worried about mono compatibility), for which some specialist high-end products exist, such as the CEDAR AZ1.

If you really want to "phase lock" two files together as best as possible, run them as a stereo pair through SpectraPlus and graph the phase vs frequency response, then use an Adobe Audition phase equalizer (if you've got this) to roughly tame the worst errors, then check again with SpectraPlus. Repeating with some trial and error or "bracket testing" over several attempts, you might get what you're after, although I doubt you'll ever be able to get a 100% match all the way up to 20kHz. I would apply processing to one mic only and leave the other alone until you've best matched them, and then finally re-apply your transfer function to split the difference between the two mics, so as not to over-process either signal more than you have to. If you do too much allpass phase shifting on an otherwise okay recording, you could really mess up the audio clarity anyway. The initial transient attack is most important with audiophile hi-fi, and is very sensitive to phase distortion.
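The measurement half of that loop, graphing phase difference against frequency, boils down to taking the angle of the cross-spectrum of the two tracks. A sketch of the underlying arithmetic (synthetic signals, not SpectraPlus itself; the 44-sample delay is illustrative):

```python
import numpy as np

sr = 44100
rng = np.random.default_rng(0)
mic_a = rng.standard_normal(sr)        # noise standing in for programme material
mic_b = np.roll(mic_a, 44)             # same signal ~1 ms late (circular shift)

X = np.fft.rfft(mic_a)
Y = np.fft.rfft(mic_b)
freqs = np.fft.rfftfreq(len(mic_a), d=1 / sr)
phase_deg = np.degrees(np.angle(np.conj(X) * Y))

# A pure delay reads as phase falling linearly with frequency (wrapped to
# +/-180): roughly -360 degrees per kHz for a 1 ms offset. Frequency-
# dependent mismatch between two real mics shows up as departures from
# that straight line.
```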

If you can recreate the recording session and lay down some tracks with test signals like sharp loud clicks and steady sine sweeps, you might be able to see what's going on more easily. In the digital domain, square waves make a very good test signal.
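Quick Python versions of those test signals, a one-sample click (all frequencies at once) and a logarithmic sine sweep (durations and frequency range are illustrative):

```python
import numpy as np

sr = 44100
dur = 2.0
t = np.arange(int(sr * dur)) / sr

click = np.zeros(int(sr * dur))
click[0] = 1.0                          # impulse: flat spectrum, sharp transient

f0, f1 = 20.0, 20000.0                  # sweep 20 Hz -> 20 kHz
k = np.log(f1 / f0)
sweep = np.sin(2 * np.pi * f0 * dur / k * (np.exp(t / dur * k) - 1))
# instantaneous frequency rises exponentially: f0 * exp(t / dur * k)
```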

Have fun...

PostPosted: Fri Jan 18, 2008 5:27 am
by sophonomoth
PianistUK, no need for SpectraPlus: Adobe Audition 3 has everything you need in its analysis tools.
To my knowledge, it is the only audio editor to feature a Nyquist plot (spin/pivot) mode in its phase analyser. This will also allow you to visualise the group delay function of a mono file.

You mentioned the CEDAR AZ1; be confident that AA3 will now do the same job.

In AA3, you have two methods of phase correction.

*1 a restoration plug-in named Automatic Phase Correction.

*2 a Stereo Imagery one named "Graphic Panner". This one features a learn mode!
It can correct three parameters at once or separately:

*L/R balance (graphic panner): the panning method pans either by raising and lowering amplitude in each channel, or by moving audio across the stereo field.

*Expansion (graphic stereo width).

*Graphic phase correlation.

Have fun, but beware of artifacts in automatic mode.