MIXING in the Round
May 1, 2001
A wealth of choices can be a blessing and a curse.
Wouldn’t you know it? Just when recording engineers have mixing in stereo down cold, 5.1-channel surround sound comes along. Of course, movie soundtracks have been using surround sound for years, but the basic formula is pretty straightforward: dialog in the center, music in the front left and right, surround effects in the rear, low-frequency effects (explosions, earthquakes, and so forth) in the subwoofer.
Now audio-only music recordings are being mixed in 5.1 surround, and the old rules have been thrown out the window. Where do you place the listener with respect to the performer — in the “audience” or in the midst of the ensemble? With five main channels surrounding the listening position, the number of mixing decisions has increased substantially compared with stereo, and engineers are only starting to grasp the scale of the task.
I like to think of 5.1 surround mixing as similar to using different camera angles for shooting a movie. Sometimes up close and personal is the right approach, but other situations call for a wide panoramic shot. In any event, you need to understand surround-mixing technology before you jump in headfirst; if you are unfamiliar with this technology, see “You’re Surrounded” in the October 2000 issue of EM.
SOUND FIELD OF DREAMS
Consider a typical stereo sound system with two speakers placed in front of a listener centered between them. The space between the speakers is called the stereo sound field, and individual sounds in a mix can be placed at any location within this space. Two basic principles of psychoacoustics let engineers do this: relative level and inter-ear time delay.
Relative level is the way in which a sound’s volume at each ear helps determine its source’s location. In a stereo mix, each input channel’s pan pot determines the relative level of the corresponding signal in the right and left speakers, and the main fader controls the signal’s overall level (see Fig. 1). If the pan pot is centered, the signal’s level is equal in both speakers, and the listener’s brain is fooled into believing that the sound is coming from the point halfway between them. It’s as though there were another speaker at that location; in fact, this virtual speaker is often called a phantom center. If you move the pan pot to the left, the signal’s level is greater in the left speaker, and the apparent sound source moves to the left of center. Move the pan pot to the right, and the apparent sound source shifts to the right because that sound’s level is greater in the right speaker.
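The relative-level principle is easy to see in code. Here is a minimal sketch of a pan pot using the equal-power (sine/cosine) law, which is one common choice; the article doesn’t prescribe a particular law, so treat the details as an illustration rather than how any specific console works:

```python
import math

def pan_equal_power(sample, position):
    """Split a mono sample into left and right channel outputs.

    position runs from -1.0 (hard left) through 0.0 (center)
    to +1.0 (hard right). The equal-power law keeps perceived
    loudness roughly constant across the pan range, dipping
    each channel about 3 dB at center.
    """
    angle = (position + 1.0) * math.pi / 4.0   # maps -1..+1 to 0..pi/2
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right
```

With the pot centered, both speakers get the signal at about 0.707 of full level (the -3 dB point), which the listener’s brain fuses into a phantom center.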
The other psychoacoustic principle, inter-ear time delay, helps you localize a sound source according to the difference between the instants at which a sound arrives at each ear. For example, if a sound source is to your left, the sound arrives at the left ear before it arrives at the right.
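To get a feel for the magnitudes involved, the inter-ear delay for a distant source can be estimated with a simple sine model. The ear spacing and speed of sound below are round-number assumptions, and real heads add shadowing and diffraction effects this sketch ignores:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
EAR_SPACING = 0.18       # m, roughly the seven-inch ear-to-ear distance

def interaural_delay_seconds(azimuth_degrees):
    """Approximate inter-ear arrival-time difference for a far source.

    azimuth_degrees: 0 = straight ahead, 90 = fully to one side.
    """
    return EAR_SPACING * math.sin(math.radians(azimuth_degrees)) / SPEED_OF_SOUND
```

A source fully to one side arrives at the near ear only about half a millisecond before the far ear, yet that tiny difference is enough for the brain to localize it.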
This principle is best simulated in a stereo recording by using a pair of microphones to pick up an entire ensemble, rather than combining multiple tracks through a mixer (see Fig. 2). If the microphones are placed too far apart, you lose the inter-ear effect. An Office de Radiodiffusion-Télévision Française (ORTF) configuration works quite well: simply place two cardioid mics at an angle of about 110 degrees with the capsules roughly seven inches apart (the average ear-to-ear distance on a human head).
With this technique, what you hear is what you get. You can adjust each instrument’s volume by moving the musicians nearer to or farther from the mics, and you can change each instrument’s stereo placement by moving the musicians to the left or right in front of the mics.
Unfortunately, listening to such a recording’s playback on speakers can result in speaker crosstalk, which occurs when the left speaker’s sound reaches the right ear and vice versa. This can obscure the inter-ear effect in the recording, but separating the mics by 10 to 12 inches reduces the problem. Listening on headphones eliminates the problem altogether.
This procedure has been employed on some of the finest orchestral and acoustic recordings. It’s difficult to do well, however, because you must think about the mix from the very beginning of the recording, and many engineers and musicians don’t want to give up the luxury of fixing it in the mix with punch-ins and pitch correction. That’s a pity; this technique can create a beautiful stereo sound field that simply can’t be duplicated with separate tracks and pan pots. All you need to record in this manner is a nice pair of microphones, a great room, and a stereo recorder.
The same two principles can be applied to 5.1-channel surround recordings, which are played back with a surround-speaker system that includes front left, center, and right speakers; left and right surround speakers; and one or more subwoofers, all arrayed around the listening position. You can start with a multi-track master and send all the tracks through a surround mixer. But instead of a simple right-left pan pot, each input channel includes a surround-panning control, which functions like a joystick (see Fig. 3). Such a mixer might be a hardware device, or it might be implemented in software that runs with a multi-track digital-audio program.
If you want an instrument or voice to sound as though it’s coming from a particular speaker, simply grab the panning control and pan the sound to that speaker. What’s more, you can adjust each track’s apparent location anywhere between the speakers by moving the panning control to any available position. For instance, you can spread the drum kit across the three front speakers and place the guitar anywhere you like in the front or back. It’s just like stereo mixing, but now you can decide whether you want to place the listener in front of the band, in the middle of the stage, or in some other strange place.
You can also record ensembles with a surround-microphone array, which is just an extension of a stereo array. However, you have to choose where to place the instruments in the surround sound field, which determines the listener’s perspective. For example, you can position the ensemble in front of the array, using the rear microphones to pick up the room’s ambience (see Fig. 4a), which puts the listener at the conductor’s position. Alternatively, you can place the array in the center of the ensemble, thereby putting the listener within the group (see Fig. 4b). In either case, the sound field in the surround speakers is particularly effective because these speakers are frequently positioned to the sides of and just behind the listener, sort of like oversize headphones.
All 5.1-channel surround systems include a physical center-channel speaker that offers yet another choice: do you put the center-channel information in the physical center speaker, the phantom center (that is, equal volumes in the front right and left speakers), or both? Many joystick panners have a width or focus control that determines the proportion of a track that is routed to the physical center and phantom center (see Fig. 5). For instance, you can set a joystick panner so that a center pan puts the track exclusively into the physical center speaker, equally into the left and right speakers, or any combination of the two.
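A width or focus control like the one described can be thought of as a crossfade between the physical and phantom centers. The sketch below is a hypothetical implementation using equal-power weighting, not any particular panner’s design:

```python
import math

def center_focus(sample, focus):
    """Split a center-panned signal between the physical center
    speaker and the phantom center (equal left/right levels).

    focus = 1.0 -> entirely in the physical center speaker
    focus = 0.0 -> entirely phantom (equal levels in L and R)
    Equal-power weighting (a design assumption, not a standard)
    keeps total acoustic power constant as focus changes.
    """
    center = sample * math.sqrt(focus)
    side = sample * math.sqrt((1.0 - focus) / 2.0)
    return side, center, side   # (left, center, right)
```

At any focus setting the squared gains sum to 1, so sweeping the control moves the image between the real speaker and the phantom image without an apparent level jump.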
Here is a center-channel goof that even some famous engineers have made: if you place the dry lead vocal in only the center speaker and pan that track’s reverb return to the left and right speakers, you have a potentially embarrassing situation. If a consumer turns on only the center speaker and thus solos the lead vocalist without reverb or delay support, it can sound pretty bad. The same thing can happen if the listener stands next to the center speaker.
Few singers sound great perfectly dry (without reverb or delay), and there’s the potential to hear all sorts of sniffs, grunts, and other lip noises, which are not flattering. That’s why anything panned to the center speaker should have a small amount of reverb and delay. It doesn’t need to be as heavy as the returns to the left and right speakers, but it should be there nonetheless.
Furthermore, it’s generally a bad idea to put your lead vocalist exclusively in the center speaker. Some home-theater playback systems rely on the television’s speaker to serve as the center-channel speaker, but I wouldn’t want my vocals to be piped through that little thing. Also, some people forget to turn the TV on when listening to music, and others do not have a center speaker at all. As a result, I like to pan some of the vocals to the left and right phantom center, with the majority of the sound going to the physical center speaker with its own reverb and processing.
Why bother to use the center speaker at all? Some great engineers, such as Al Schmitt and Alan Parsons, simply don’t use the center speaker on some of their projects, effectively making a quad mix. I don’t agree with that philosophy, and I think they’re missing a mixing opportunity. If you use the center speaker properly, you can widen the front sound field for more of the listening audience you find in a typical living room.
For instance, with a pair of front speakers, there is a very narrow sweet spot in which the stereo sound field is correct. Move a few feet to the left or right, and the image collapses to that side. A center speaker adds focus to the stereo image in the front, effectively widening the sweet spot so that everyone can hear the vocals (or whatever musical element you put there) coming from the center. The stereo sweet spot with a phantom center can never be as wide.
In addition, I have done various surround mixes in which I treated the left-to-center pair as one stereo mix and the center-to-right pair as another. That configuration works great with two percussionists, such as a conga player on one side of the stage and a regular drum kit on the other. Pan the congas between the left front and the true center speaker, and the drum kit between the true center and the right front speaker. Remember that I’m talking about a virtual mixing stage; it has nothing to do with the original positions in the studio. If you have properly isolated instruments on separate tracks, you can build your own surround stage with the joystick panning controls.
THE BIG BAD LFE
The low-frequency effects, or LFE, channel (the .1 in 5.1) is the most controversial part of surround mixing, and it certainly offers the greatest potential for screwing up the mix. Its bandwidth is specified as 5 to 120 Hz. But do you need to put anything in it at all? If you choose to put some bass in it, do you exclude that information from the main or surround speakers? These are all good questions, some of which have not been thought through even by big-money mixing engineers.
First of all, you don’t really want to use the entire top end of the range all the way to 120 Hz. A brickwall filter at 120 Hz with a 48 dB/octave slope is applied to the LFE track when it’s encoded as DTS or Dolby Digital. That filter sounds pretty bad, so it’s better to insert your own 24 dB/octave filter at 80 Hz or even 60 Hz.
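A 24 dB/octave low-pass at 80 Hz can be approximated in plain Python by cascading four 6 dB/octave one-pole sections. This is an illustrative sketch, not the Butterworth or Linkwitz-Riley design a real plug-in would likely use, and the cascade droops somewhat in the passband near the cutoff:

```python
import math

def one_pole_lowpass(samples, sr, cutoff):
    """Single-pole (6 dB/octave) low-pass filter."""
    a = math.exp(-2.0 * math.pi * cutoff / sr)
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def lfe_lowpass(samples, sr, cutoff=80.0, stages=4):
    """Roughly 24 dB/octave: four cascaded 6 dB/octave stages."""
    for _ in range(stages):
        samples = one_pole_lowpass(samples, sr, cutoff)
    return samples
```

A 40 Hz fundamental passes nearly intact, while content two octaves above the cutoff is attenuated by roughly 48 dB, keeping the encoder’s harsh brickwall filter out of the picture.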
In addition, don’t remove the bass from the other tracks and place it in the LFE channel exclusively. If the listener chooses to listen to your surround music in stereo (a process called downmixing that is performed in the receiver or surround processor), the LFE track is thrown out. If you take all the bass below 80 Hz from the kick drum or bass guitar and place it only in the LFE track, that information will disappear if your listeners choose to downmix.
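A stereo downmix makes the problem obvious: the decoder folds the five main channels into two and simply discards the LFE track. The -3 dB fold-down coefficients below are a common ITU-style assumption; actual receivers vary:

```python
def downmix_to_stereo(l, r, c, ls, rs, lfe):
    """Fold one frame of 5.1 audio down to stereo.

    Center and surrounds are mixed in at -3 dB (0.7071),
    a typical ITU-style choice. Note that the lfe argument
    is never used: bass placed only in the LFE channel
    vanishes entirely from the stereo downmix.
    """
    g = 0.7071
    left = l + g * c + g * ls
    right = r + g * c + g * rs
    return left, right
```

Whatever you feed the LFE input, the stereo output is unchanged, which is exactly why the kick drum’s low end needs to live in the main channels too.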
For music mixing, you never really have to put anything in the LFE channel. All home-theater systems employ a process called bass management, in which any low-frequency information is redirected to the subwoofer by filters in the receiver or surround decoder. It is better to save the LFE track for truly bottom-heavy things, such as the cannon shots in the 1812 Overture or an octave-down synth bass that goes down to 18 Hz. That’s how the LFE channel is supposed to be used.
Nevertheless, LFE bass is like a drug, and most mixing engineers can’t stop tweaking up the LFE channel to make the mix really thump. The LFE channel has 10 dB more headroom than the other five full-range channels, but that doesn’t mean home listeners have 10 dB more power reserve for their subwoofers. Most home subwoofers are seriously underpowered and will run out of bass headroom long before the five full-range channels top out.
As mentioned previously, in stereo you can only pan each track left to right in front of the listener’s ears. But in 5.1 surround, you can pan left to right and front to rear, thus creating a whole new array of audience positions. You can make the band sound as though it is in front of the listener (with the audience around it and slap echo coming off the back wall, as in a concert), position the listener in the middle of the group, or try any combination thereof. You also can make the room spin around the listener’s head, put sound effects in the rear speakers for realistic acoustic segues, and place your backup singers in the rear of the room. The possibilities are endless.
Here are a couple of examples of different treatments for the same basic tracks. In the mix depicted in Fig. 6a, I panned the lead vocal to the center speaker, the guitar to the left front speaker, the keyboards to the right front speaker, and the stereo crowd mics to the rear speakers. I also put the backup vocals in the left and right front speakers. This setup gives a real room sound to the mix because the crowd mics pick up the slapback echo off the rear wall and add room ambience. Reverb and vocal echo are returned to the front speakers, as in a traditional stereo mix.
In Fig. 6b, I panned the backup vocals to the left and right rear speakers and then added some delay and reverb to the center lead vocal, returned to the rear speakers by its own surround joystick. This makes the listeners feel as though they’re positioned in the middle of the band. A second stereo reverb processor can be used on the same vocal and returned to the left and right front speakers.
That’s the primary reason you need multiple reverb and delay processors for surround mixing. You really don’t need to pay $5,000 to $25,000 for the latest surround reverb processor from Sony, Eventide, or TC Electronic (as fabulous as their reverbs certainly are). Indeed, you can create great surround mixes with just two or three stereo reverbs and delays. One reverb is returned to the front right and left speakers, a second reverb is returned to the rear left and right speakers, and a third can be returned to the center speaker.
Surround sound is becoming more and more important for music as well as movie soundtracks, and it behooves engineers at all skill levels to begin exploring this vast new frontier. Hopefully, the information I have presented here will help you develop your surround-mixing skills to the point that groups start beating a path to your studio. But until that time, practice the techniques I mentioned and try out your own ideas, which will undoubtedly lead you in some interesting directions.
Mike Sokol is a live-sound and recording engineer with 40 years of experience on both sides of the console.