I've been trying to find the answer to this question but cannot. Let's say I have a stereo file but the track is set to mono. Do Audio Units handle the summing of the stereo track down to a single buffer of audio data?
So in my process function, can I always assume that inBuffer.mNumberBuffers == outBuffer.mNumberBuffers, whether the data is interleaved or not? And what about the case of a surround file going to stereo or mono? If someone has the answer, and/or can point me to some documentation about this, that'd be great.
I don't know specifically how it is with Audio Units. But effect plugins can (or should be able to) implement any combination of input/output channel counts: mono --> stereo, mono --> surround, stereo --> mono. Most often you'd want mono --> mono and stereo --> stereo, but of course this depends on what your effect actually does. I would not depend on the host doing the summing to mono. If you want mono output, then only provide a mono output. Simple as that, methinks...
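For what it's worth, if you decide to do the summing yourself instead of relying on the host, a minimal sketch in plain C could look like the function below. The function and buffer names are mine for illustration, not part of any Audio Unit API; it assumes non-interleaved stereo buffers and scales each sample by 0.5 so a signal present equally in both channels keeps its level:

```c
#include <stddef.h>

/* Sum a non-interleaved stereo pair down to one mono buffer.
 * Scaling by 0.5 avoids clipping when both channels carry the
 * same full-scale signal. Names are illustrative only. */
static void sum_stereo_to_mono(const float *left, const float *right,
                               float *mono, size_t frames)
{
    for (size_t i = 0; i < frames; ++i)
        mono[i] = 0.5f * (left[i] + right[i]);
}
```

Whether you scale by 0.5, by 1/sqrt(2), or not at all is a design choice; 0.5 keeps correlated material at unity but drops uncorrelated material by 6 dB.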
BTW: in the context of effect plugins it's better to call it a "channel" rather than a "file". You don't know where the audio comes from... it could be a synth providing it, not a file as such.