As an audio engineer, you might decide to work with two or more microphones in order to generate an alternative tonal option for a source, or to pick up only a portion of its sound. When you eventually combine those options and sum them into a bus or the whole mix, you expose your recording to the risk of so-called "comb filtering" caused by phase misalignment between the signals. The mix might lose bass content, turn "boxy" or "phony," and simply sound weird. You won't hear these effects while the sounds are spread across the stereo image, since in that configuration the signals are only partially summed, but as soon as you (or someone else) listen in mono, the problems become strikingly apparent.

What's going on? Why does the sound change? Well... the sum of two (or more) signals doesn't always give a positive result: it can also give a negative one. Or, to be perfectly correct, a null one. Sound waves consist of peaks and dips in pressure: if a given sound wave arrives at mic #2 with such a delay that its "peak" corresponds to a "dip" picked up by mic #1, the net result is zero.
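To make the peak-meets-dip picture concrete, here is a minimal sketch in plain Python. The 1 kHz source frequency is an assumption chosen for illustration; the delay is exactly half its period, so every peak from one mic lands on a dip from the other:

```python
import math

freq = 1000.0            # source frequency in Hz (assumed for illustration)
delay = 1 / (2 * freq)   # half a period of delay between the two mics

# Sample one period of the source and sum the two mic signals:
# sin(x - pi) = -sin(x), so every peak meets a dip and the sum is null.
for k in range(8):
    t = k / (8 * freq)
    mic1 = math.sin(2 * math.pi * freq * t)
    mic2 = math.sin(2 * math.pi * freq * (t - delay))
    print(f"mic1 = {mic1:+.3f}   mic2 = {mic2:+.3f}   sum = {mic1 + mic2:+.3f}")
```

Run it and the sum column reads zero at every sample: two healthy signals adding up to silence.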

It's common practice to position microphones very close to each other in order to avoid the delay. Alternatively, one can follow the 3:1 rule: position the second mic at least three times as far from the source as the first mic. Is it a matter of time? Well... yes. And no. Not only time, but also...
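Why does the 3:1 rule help? A short sketch (the 0.2 m spot-mic distance is assumed for illustration) shows the usual reasoning: by the inverse-distance law, the farther mic picks up the source roughly 9.5 dB quieter, so even a worst-case out-of-phase sum can only dip a few dB instead of cancelling outright:

```python
import math

d1 = 0.2        # mic #1 to source, in meters (value assumed for illustration)
d2 = 3 * d1     # mic #2 placed per the 3:1 rule

# Inverse-distance law: tripling the distance drops the level ~9.5 dB.
drop_db = 20 * math.log10(d2 / d1)
print(f"mic #2 hears the source about {drop_db:.1f} dB quieter")

# Equal-level signals summed fully out of phase cancel completely; with one
# signal ~9.5 dB down, the deepest possible dip is far shallower.
a = 10 ** (-drop_db / 20)          # linear amplitude of the quieter signal
dip_db = 20 * math.log10(1 - a)    # worst-case destructive sum
print(f"deepest possible cancellation: about {dip_db:.1f} dB")
```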

When dealing with phase, we first must be aware of some facts regarding A) the physical nature of sound and B) the way microphones work. I was not fully aware of these facts until I found myself dealing with phase issues and decided to understand them. Admittedly, this was not an easy task. I found these facts somewhat hidden within the dedicated literature, inaccurately described, misrepresented in various ways, or sometimes ignored completely. What follows is the result of personal research, both theoretical and practical, into the use of two or more microphones.

The first fact to be aware of you might already know: phase is frequency-dependent.

There is no such thing as a "general phase problem" pertaining to a combination of signals generated by multiple microphones. For a given combination, a phase problem always occurs at a certain fundamental frequency and at higher, mathematically related frequencies (depending on the type of microphones used). As previously said, sound waves have peaks and dips of pressure; the physical distance between them is different for each frequency, because each frequency has its own wavelength. In the example above, when a peak at the second microphone corresponds to a dip at the first, the result is a total cancellation of one fundamental frequency (plus related higher frequencies). But in any such case, there will be a different wavelength (i.e. frequency) for which both microphones simultaneously pick up a peak (or a dip), and the result is a boost in level at that frequency. In other words, any time there is cancellation at one fundamental frequency, there will be a boost at another.
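For two equal-level signals offset by a fixed delay, the cancelled and boosted frequencies can be listed directly: notches fall at odd multiples of 1/(2 x delay) and reinforcements at multiples of 1/delay. A quick sketch, with a 1 ms delay assumed for illustration:

```python
# For two equal-level signals offset by a fixed delay, cancellations fall at
# odd multiples of 1/(2 * delay) and reinforcements at multiples of 1/delay.
delay = 0.001   # seconds; roughly 34 cm of extra path at ~343 m/s (assumed)

notches = [(2 * k + 1) / (2 * delay) for k in range(4)]
peaks = [(k + 1) / delay for k in range(4)]

print("cancelled (Hz):  ", [round(f) for f in notches])   # 500, 1500, 2500, 3500
print("reinforced (Hz): ", [round(f) for f in peaks])     # 1000, 2000, 3000, 4000
```

That evenly spaced alternation of dips and boosts is exactly the "comb" in comb filtering.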

This is very important to be aware of: phase is always a risk and, at the same time, an opportunity to carve a desired tonal shape with your main recording tools: the microphones. Some frequencies are canceled while others are enhanced.

There is a second fact we must be aware of: microphones don't translate sound pressure in the same way. No, I am not talking about different frequency responses. Maybe you know this as well, but are you sure you know the whole story? 

The types of microphones widely used nowadays are condensers, dynamics, and ribbons. 

In a condenser microphone, the transducing element, called a capsule, consists of a supporting ring to which two parts are attached: an extremely thin, circular sheet of metal-sputtered plastic held under tension, called the diaphragm, and behind it (following the direction of the sound wave) a thicker, fixed, perforated metallic plate. (In the majority of condenser capsules in use today, two diaphragms are present, one on each side of the backplate. For the purpose of this explanation, we can consider just the front one.)

The two elements, diaphragm and backplate, are separated by an air gap and polarized by an electrical charge, effectively constituting the two plates of a capacitor ("condenser" is an older term for capacitor). The thin diaphragm vibrates along with the periodic variation in air pressure of the sound wave. With positive pressure, the diaphragm is pushed toward the backplate; the distance between them shortens, and the capacitance of the system increases. Conversely, the diaphragm is pulled outward by decompression; its distance from the backplate increases, and the capacitance varies in the opposite direction. This oscillation of the capacitance is used electrically to generate the varying voltage at the output of the microphone. Note that the amplitude of the output signal is highest when the diaphragm is at its closest or farthest position from the backplate. Therefore, in a condenser microphone, the level of the signal is directly proportional to the air pressure: the more pressure (or decompression), the more signal.
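A simplified constant-charge model makes the proportionality visible (all values are illustrative, not taken from any particular capsule): with the charge fixed, the voltage across the capsule varies linearly with the gap, and the gap varies with the instantaneous pressure.

```python
EPS0 = 8.854e-12   # permittivity of air, F/m (approximately that of vacuum)
AREA = 2.0e-4      # diaphragm area in m^2 (illustrative)
GAP0 = 25e-6       # resting diaphragm-to-backplate gap in m (illustrative)
V_POL = 60.0       # polarizing voltage in volts (illustrative)

Q = (EPS0 * AREA / GAP0) * V_POL   # charge held on the capsule at rest

# Positive pressure pushes the diaphragm in (gap shrinks); decompression
# pulls it out (gap grows). At constant charge, V = Q / C = Q * gap / (eps * A),
# so the output voltage swings linearly with the displacement.
for delta in (-1e-6, 0.0, +1e-6):              # gap change in meters
    v_out = Q * (GAP0 + delta) / (EPS0 * AREA)
    print(f"gap change {delta * 1e6:+.1f} um -> output {v_out:.2f} V")
```

Equal displacements in and out produce equal voltage swings around the resting value: the signal tracks the pressure.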

With dynamic and ribbon mics, the transduction of air pressure into electrical voltage is accomplished by electromagnetic induction. As in a condenser, a diaphragm is present in a dynamic microphone. The whole system, though, is heavier (i.e. slower and "less accurate"), because attached to the diaphragm is a coil of thin copper wire. This coil surrounds a fixed, strong magnet. According to the principles of electromagnetic induction, voltage is induced on the coil when it...
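A minimal sketch of the induction relationship at play here, the standard motor/generator law EMF = B * L * v, with illustrative values (not taken from any particular microphone):

```python
B = 1.0    # magnetic flux density in the gap, tesla (illustrative)
L = 5.0    # total length of coil wire within the field, meters (illustrative)

# Motor/generator law: EMF = B * L * v. The induced voltage scales with how
# fast the coil is moving through the field.
for v in (0.0, 0.001, 0.002):          # coil velocity in m/s
    emf = B * L * v
    print(f"coil velocity {v * 1000:.1f} mm/s -> {emf * 1000:.1f} mV")
```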
