Sorry for the typo! That word should be "transfer", as in "transfer function", meaning whatever function is needed (i.e. processes like EQ, filtering, phase shifting, compression, limiting, etc.) to transform the input signal into the output signal.
I see you had similar problems typing the word "proclivity" but never mind!
The forwards and backwards EQ'ing of a signal lets you separate the amplitude gains from the phase shifts, depending on what order you do the processing in. In a program like the excellent Adobe Audition mentioned above (originally called Syntrillium Cool Edit Pro) you can write a script to do these EQ / reverse / EQ / reverse sequences quickly and consistently.
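Just to illustrate the idea (this is a generic sketch, not anything from Audition itself): filtering a signal forwards and then backwards through the same IIR filter doubles the amplitude response in dB but cancels the phase shift completely. SciPy's filtfilt does exactly this trick in one call. The filter here is an arbitrary example low-pass, not any particular plugin's EQ.

```python
import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone

b, a = signal.butter(2, 2000 / (fs / 2))  # simple 2nd-order IIR low-pass

y_once = signal.lfilter(b, a, x)          # normal forward pass: phase-shifted output
y_zero = signal.filtfilt(b, a, x)         # forward + reverse: zero-phase output
```

The zero-phase version lines up exactly with the input (no delay), while the single-pass version lags behind it, which you can see by overlaying the waveforms.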
I forgot to say this assumes you are using normal IIR (infinite impulse response) filters for the parametric EQ or whatever, which would behave the same as traditional analogue EQ. To be honest, you could do this type of phase shifting with any recording device that can play a signal backwards, using outboard analogue equalizers. However, you'd pick up all kinds of other phase shifts (and transfer functions) in the re-recording process to tape or digital via the A/D and D/A converters anyway, so that method would be very approximate.
The beauty of digital editing is trying something out and saving as a new WAV file to directly compare it against the unaltered original.
A lot of software EQ these days offers FIR (finite impulse response) digital filtering, which can give you phase-linear EQ processing directly, so to speak. I have found FIR filters to be noisier due to rounding-off in the maths, but they are quite useful. Something like Adobe Audition's Phase Equalizer pictured above works by using FIR filters, I think (I don't have this software myself).
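For the curious, here's a minimal sketch of a linear-phase FIR "EQ" using SciPy's firwin2, which builds an FIR filter from an arbitrary gain-vs-frequency curve. The curve here (a gentle boost around 5 kHz) is just an example I made up.

```python
import numpy as np
from scipy import signal

fs = 44100
freqs = [0, 2000, 5000, 8000, fs / 2]   # breakpoint frequencies in Hz
gains = [1.0, 1.0, 2.0, 1.0, 1.0]       # linear gain at each breakpoint

# 501 symmetric taps -> a type I linear-phase FIR filter
taps = signal.firwin2(501, freqs, gains, fs=fs)

# A symmetric odd-length FIR like this has exactly linear phase: every
# frequency is delayed by the same (501 - 1) / 2 = 250 samples, so the EQ
# changes amplitude without smearing the relative phase of the components.
delay = (len(taps) - 1) // 2
```

That constant 250-sample delay is the price of linear phase, which is why these filters always introduce latency.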
A further point is not to confuse talking about "Phase" when you really mean "Polarity" because people often say "out of phase" to mean a signal is inverted to the opposite polarity. The signal would still be in perfect phase with the opposite polarity signal at all frequencies!
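A tiny illustration of the distinction (my own example, not from any particular tool): flipping polarity is just multiplying by -1, which treats every frequency component identically, whereas a genuine phase shift such as a time delay affects different frequencies by different amounts.

```python
import numpy as np

t = np.arange(44100) / 44100
low = np.sin(2 * np.pi * 100 * t)        # 100 Hz component
high = np.sin(2 * np.pi * 5000 * t)      # 5 kHz component
mix = low + high

inverted = -mix                          # polarity flip: every component negated equally
# A 1 ms delay, by contrast, shifts 100 Hz by 36 degrees but 5 kHz by 1800 degrees,
# i.e. a frequency-dependent phase shift rather than a simple inversion.
```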
A phase correlation meter basically only tells you whether the two channels sum together as exactly identical mono (+1), uniquely different stereo (0), or exactly opposite polarity-inverted mono (-1). That doesn't tell you much about their frequency content, except that if the overall sum reads somewhere between 0 and -1 then the signals must be cancelling each other out at some frequencies, if not all. You need a better measurement tool to find out where, such as SpectraPlus.
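Roughly speaking (this is a simplified sketch; real meters work on short windows with ballistics), the meter computes the normalized correlation between the left and right channels:

```python
import numpy as np

def correlation(left, right):
    """Normalized correlation: +1 identical mono, 0 uncorrelated, -1 inverted mono."""
    return float(np.dot(left, right) /
                 (np.linalg.norm(left) * np.linalg.norm(right)))

t = np.arange(44100) / 44100
sig = np.sin(2 * np.pi * 440 * t)

c_mono = correlation(sig, sig)       # identical channels  -> +1.0
c_flip = correlation(sig, -sig)      # polarity-inverted   -> -1.0
```

Because the whole spectrum collapses into one number, two tracks cancelling badly at 200 Hz but agreeing everywhere else can still read close to +1.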
If you've got two different mics recording one source (like a vocal), maybe at different distances, you will get a slight time delay between the mics: the different arrival times of the overall sound pressure nodes and antinodes from the singer's voice, plus the room acoustics from all different directions. The microphone's own shape and pickup polar pattern will affect things too. It's very complex. A ribbon mic, for instance, has a figure-8 pattern, which means opposite polarity from the rear of the mic compared to an omnidirectional mic in the same position!
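The overall arrival-time difference between two mic tracks is at least easy to estimate by cross-correlation. This is a synthetic sketch: the "far mic" here just hears the same click a few samples later, which is obviously much cleaner than a real room.

```python
import numpy as np
from scipy import signal

fs = 44100
x = np.zeros(fs // 10)
x[1000:1100] = np.hanning(100)        # near mic: a short click at sample 1000

true_delay = 44                        # ~1 ms at 44.1 kHz, about 34 cm of extra distance
y = np.roll(x, true_delay)             # far mic: the same click, delayed

corr = signal.correlate(y, x, mode="full")
lag = int(np.argmax(corr)) - (len(x) - 1)  # sample offset of best alignment
```

Sliding one track by that lag fixes the bulk time delay, but as the rest of this post says, it won't fix the frequency-dependent phase differences on top of it.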
There is no easy way to match up (correct) the phase non-linearities of two different mic position recordings without taking into account the different EQ curves, distortion and direct-to-reverb ratio of each as well. The best way to avoid these problems is to use matched stereo pairs of mics in symmetrical layouts, and mics which have very good phase response like Earthworks or DPA, or omni mics in general. Even with two theoretically perfect omni mics in an anechoic chamber, you won't get two phase-identical tracks recorded from different mic positions, because the acoustic source (singer / piano / guitar / drumkit / sax / trumpet / violin / loudspeaker cone, etc.) will radiate different frequency spectra in different physical directions, with natural phase shifts and subtle Doppler-type artifacts from movement (e.g. the singer's head) and from physical size / distance (e.g. a 9-foot grand piano) causing mixtures of arrival times. Your ears wouldn't be able to pinpoint location without these phase differences; they are part of nature and sound.
The type of correction you are looking for might be azimuth correction (in the old days used for aligning stereo tape recordings, when people were worried about mono compatibility), for which some specialist high-end products exist. Have a look at http://www.cedaraudio.com/intro/az_intro.html
If you really want to "phase lock" two files together as best as possible, run them as a stereo pair through SpectraPlus and graph the phase vs frequency response, then use an Adobe Audition phase equalizer (if you've got this) to roughly tame the worst errors, then check again with SpectraPlus. Repeating with some trial and error or "bracket testing" over several attempts, you might get what you're after, although I doubt you'll ever be able to get a 100% match all the way up to 20kHz. I would apply processing to one mic only and leave the other alone until you've best matched them, and then finally re-apply your transfer function to split the difference between the two mics, so as not to over-process either signal more than you have to. If you do too much all-pass phase shifting on an otherwise okay recording you could really mess up the audio clarity anyway. The initial transient attack is most important with audiophile hi-fi and is very sensitive to phase distortion.
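If you don't have SpectraPlus handy, the phase-vs-frequency graph it draws can be approximated from the cross power spectrum of the two tracks. A sketch, using SciPy's Welch-averaged csd and a synthetic pair where mic 2 is just mic 1 delayed by 10 samples:

```python
import numpy as np
from scipy import signal

fs = 44100
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)                   # pretend this is mic 1
y = np.concatenate([np.zeros(10), x[:-10]])   # mic 2: same signal, 10 samples late

f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
phase = np.unwrap(np.angle(Pxy))              # phase difference in radians, vs frequency f

# A pure time delay shows up as a straight line: phase = -2*pi*f*delay.
# Curvature away from a straight line is the frequency-dependent phase
# error you'd then try to tame with an all-pass / phase EQ.
```

Plotting phase against f before and after each EQ attempt gives you the "bracket testing" feedback loop described above.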
If you can recreate the recording session and lay down some tracks with test signals like sharp loud clicks and steady sine sweeps, you might be able to see what's going on more easily. In the digital domain, square waves make a very good test signal.
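Generating those test signals digitally is straightforward. A sketch (sample rate, lengths and sweep range are arbitrary choices; you'd save these out as WAV files from your editor and play them through the recording chain):

```python
import numpy as np
from scipy import signal

fs = 44100

# 1-second buffer with a single full-scale click in the middle:
# a near-impulse that excites all frequencies at once
click = np.zeros(fs, dtype=np.float32)
click[fs // 2] = 1.0

# 5-second logarithmic sine sweep from 20 Hz to 20 kHz,
# which steps through the spectrum one frequency at a time
t = np.arange(5 * fs) / fs
sweep = signal.chirp(t, f0=20, f1=20000, t1=5.0, method="logarithmic")
```

Recording both mics capturing these played through a speaker makes the relative delay and phase behaviour much easier to see than with programme material.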