Imaging Expectations
I’ve been wanting to write about this for a LONG time now, and since there’s some recent stuff floating around I figured this is probably a good time. Today I’m talking about imaging or basically what I do with the pan knob when I’m mixing for sound reinforcement. It’s hard for me to talk about panning without first getting into some of the bigger picture “why” so this is sort of two posts in one.
But before I dig in I’ve got a little bit of homework for you because I’m going to refer back to a couple of things. For starters, Bob McCarthy recently republished an old article of his on his blog. For those new to the party, Bob McCarthy basically wrote the book on system optimization. You can find Bob’s article HERE and make sure you read through the comments because Bob has some more to say in there. After that, check out the bootleg of part 1 of Robert Scovill’s FOH breakout at Gurus in Chicago. You can find that HERE.
I know some guys will debate this, but I have to say I pretty much agree with these guys: there is no stereo in live sound. Stereo just doesn’t scale the way we’d like. I wish it did because as a mixer, it would make my job much easier. We can do some cool little FX here and there with a left/right PA system, but mixing for a large space requires a different approach from the good-times stereo stuff we can do in the studio.
Bob McCarthy’s article goes into more of the technical and psychoacoustic-y stuff on why it’s hard to make stereo happen: time-related issues that affect localization and create potential destructive interference (comb-filtering) resulting from overlapping coverage patterns. This is part of why film mixes utilize a center channel that covers the entire movie theater. Dialogue is typically kept within that center channel because if it came from the left and right, the overlapping left and right channels could cause trouble in some seats in larger theaters.
So what does this mean for me as a mixer? Well, let’s go big picture for a minute. I believe every person that walks into a room to listen to live music comes in with a couple of expectations that unfortunately sometimes compete against each other. First there is an expectation for what music should sound like. This typically comes from a lifetime of listening to recorded music via records, CDs, iPods, radio, TV, whatever comes next, etc, etc, etc. Unless someone’s been living under a rock, it’s a good bet they’ve been exposed to a ton of recorded music over the course of their life. Music is everywhere.
The second listener expectation comes from what a person sees when they walk in the room. This starts with the vibe/aesthetics of the room and then includes who’s on stage and what instruments are seen played. If you walk into our main auditorium, as Robert Scovill puts it, “the imagery is huge.” It’s a big room with a big stage and big speakers suspended above it. The brain starts to make some assumptions on what it’s going to experience when it encounters this sort of thing. Walking into a dark, smokey club creates different expectations from walking into a fancy-pants theater downtown. I still remember the feeling I had walking into my very first concert and seeing the enormous clusters of loudspeakers.
Now as much as I’d like to dismiss the visual side’s impact on hearing perception, I really can’t because what we see has a substantial impact on what we hear. Our senses all work together to give us a complete experience. Just check out Poppy Crum’s demonstration in the Audio Myths post I did last year.
Do the words we see on a screen affect what we hear? Yes. If we can see a person’s mouth moving, will there be a different level of clarity? Of course; just think about how distracting lip-sync issues can be with video. Now take this a step further and ask yourself: what does an audience member expect to hear when the most animated person on stage is the drummer pounding the drum kit? What does he expect if there’s a kid up there windmilling on a guitar? What if there’s a spotlight on one particular person? Even just take the location of things on stage into account. What does one’s brain expect to hear just based on seeing guitars downstage and keyboards upstage, or vice-versa? Like it or not, there’s a mix building in the back of a listener’s head before we even push up a fader.
On the one hand, these visual cues can give us a mixing advantage, almost like a subliminal fader boost. This is part of why I believe it’s so important to keep an eye on the band during a mix, so I have insight into how the audience will perceive things. Failure to take the visuals into account can lead to a mix performed out of context, which is a potential disconnect for our audience. On the other hand, what is seen on stage can also compete with what’s good for the music. Robert Scovill pointed this out in his breakout. Maybe a singer is having a bad day and should get tucked in a little bit in the mix. Maybe someone in the band is flubbing a lot of notes and can’t get it together. And how about the whole worship-leader-playing-an-acoustic-guitar thing? Some of those acoustics really need to sit back in a mix, sometimes even WAY back, even though the person playing might be front and center.
Finding a balance between these two subliminal audience expectations is a big part of the job of mixing for sound reinforcement, and failure to balance these expectations can result in a disconnect for listeners that can pull them out of the moment. Live music and especially live church music is all about creating moments. We either facilitate that or destroy it.
Now when I’m mixing for sound reinforcement, my hope is to create an engaging sonic experience that connects with what is visually on stage. The music needs to resonate with what is seen to maximize both clarity and impact. A quote from producer/engineer Chuck Plotkin comes to mind when thinking about this:
You don’t actually have to be able to understand the lyrics; you’ve just got to feel like you could if you wanted to. – Chuck Plotkin
If I were to reinterpret this for the live environment it would probably go something like this: The audience doesn’t need to hear everything. They just need to feel like they could hear everything if they wanted to.
So let’s go back to the worship leader playing a guitar example and say our player’s strengths definitely lie more on the vocal side than the instrumental. When I’m doing my homework for a mix and later going through rehearsals, I’m listening and making mental notes on spaces where I can pop that guitar out along with spaces where it will need to pop out. For example, maybe I’m sitting the acoustic back in the mix during a full band section where it’s audible but not necessarily distinct. Maybe there’s a breakdown after the second chorus where it’s just our fearless worship leader and his acoustic guitar. That acoustic probably needs to come up in that section, so I’ll bring it up and then sink it back in as the rest of the band comes in.
A little move like this brings a couple of big advantages. For starters, it reinforces what the listener’s attention is most likely on. It helps meet their expectation of what they should be hearing. If the band breaks down and the worship leader is banging away on an acoustic that can’t be heard, it’s a disconnect. The other advantage is that by moving things up and into the foreground of the mix, I can also help direct a listener’s auditory and visual focus. For example: don’t look at the keyboardist playing a pad in the back of the mix, look at the guitarist playing the hook. Sometimes just a momentary bump on the first couple notes is all it takes to bring something out in a listener’s ear. When a sound sticks out, you will look for it. You probably already do this whenever you hear a problem. Of course, these little moves should never be arbitrary; they always need to be musical within the context of the song.
Outside of mixing, we get back to how this whole post started with another listener connection tool I use. This might be referred to as localization or imaging, but it’s basically what I do with the pan knobs. Personally, I do pan things a bit, but I will say it’s something I’ve gone back and forth on over time. You could call what I’m doing these days localization, but not necessarily in a true sense. Since we have no true stereo image, there’s no way to properly image an instrument for every seat so that the sound actually seems to come directly from the player. What I’m doing is more of an imaging cheat where I tend to soft pan instruments towards the side of the stage where they’re located. Robert Scovill had a great description of this in his breakout where he called it “wide mono.”
There are a couple things this soft-panning does for me. For starters it makes it a bit easier to get critical center information to cut in the mix, which in modern music is going to be the vocals and rhythm section. The other side of this is a mix cheat towards the visual expectation of the listeners out in the house. If you are seated/standing on the left side of the house, would you expect to hear more of the musician closer to you or farther from you? If we were in a smaller room with amps on stage, you’d most likely expect and hear the closer instrument. I think it can be a little weird to be sitting on one side of the room with a musician directly in front of you that you can’t hear. By soft-panning I’m giving that instrument a slight edge in the mix on its respective side of the room. But please understand: I’m not eliminating the other instruments on the opposite side of the stage; I’m simply trying to cheat the mix a hair towards the visual perception. Panning is maybe 1-2 or 11-10 o’clock max because I still need to maintain overall sonic information of the band throughout the room. One way I look at it is I want to maintain the spirit of the mix across the entire room. Perspective might change a bit if you move to another seat, but the overall tone should stay relatively consistent.
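To put rough numbers on how gentle a soft pan like this is, here’s a minimal sketch. It assumes a -3 dB-center constant-power pan law for illustration; consoles differ in their exact pan laws, so the figures are ballpark, not gospel:

```python
import math

def constant_power_pan(p):
    """Constant-power pan law: p = -1 (hard left), 0 (center), +1 (hard right).
    Returns (left_gain, right_gain) with left_gain**2 + right_gain**2 == 1."""
    theta = (p + 1) * math.pi / 4          # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

def db(gain):
    """Linear gain to decibels."""
    return 20 * math.log10(gain)

# Center: both sides sit at -3 dB (the classic constant-power center dip)
gl, gr = constant_power_pan(0.0)
print(f"center:   L {db(gl):+.1f} dB  R {db(gr):+.1f} dB")

# A soft pan toward the right, roughly that 1-2 o'clock range
gl, gr = constant_power_pan(0.25)
print(f"soft pan: L {db(gl):+.1f} dB  R {db(gr):+.1f} dB")
```

The point of the sketch: a pan in that range only tilts the left/right balance by a few dB, so the instrument still has real presence on the far side of the room instead of disappearing from it.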
Now, there are dangers in doing this. For starters, if mix position is off-center (like mine), when you’re balancing soft-panned instruments there can be a tendency to push the farther instrument up a bit higher than it belongs, weighting that instrument, and potentially the entire mix, louder on that side of the room. This is something that can usually be checked fairly easily in headphones, though. Personally, I also like to keep my group meters on the VENUE in view in RMS mode so that I can see the average level in the mix between things at all times. There is a natural tendency for a mix to gravitate towards the perspective of the mixer (yet another reason why mix position can be crucial). In the heat of a mix, I can spot-check meters and know that a major discrepancy in level probably means my balance is off.
Another danger is that soft-panned instruments might lose their prominence on the opposite side when they musically need to stand out. A prime example of this is a lead guitar playing a hook or guitar solo. Lighting and IMAG are going to focus on that player, in theory, and musically the part will probably need to come up as well. In a situation like this I usually use a technique I’ve seen practiced by Scovill and James Rudder (Hillsong United). Basically, I’ll take the instrument that’s soft-panned, mult it to another channel and pan it the exact opposite of the original. When the lead comes in, I push the mult’ed signal up. Think of it this way: if we put the exact same signal information in two speakers, what do we get? We get mono. My mult fader basically gives me a pan knob on a fader. Pushing up the mult into the mix adds a little bit more signal on the already soft-panned side while also bringing up the level on the opposite side. These days I typically mix with my mult fader around 10 dB down relative to the original because I’m not looking for drastic swings in level when doing this. Moves with this fader need to sound natural and musical.
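The arithmetic behind the mult trick can be sketched out, too. This is purely an illustration with hypothetical numbers, not anyone’s actual console settings: it assumes the same constant-power pan law, a mult trimmed 10 dB under the original, and two channels that are perfectly time-aligned (any delay between them would comb-filter instead of summing cleanly):

```python
import math

def pan_gains(p):
    """Constant-power pan law: p = -1 (hard left), 0 (center), +1 (hard right)."""
    theta = (p + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)

def db(gain):
    """Linear gain to decibels."""
    return 20 * math.log10(gain)

# Hypothetical numbers: a guitar soft-panned a bit left, plus a mult of the
# same signal panned the exact opposite, trimmed about 10 dB below it
p = -0.25
mult_trim = 10 ** (-10 / 20)      # -10 dB as a linear gain

gl0, gr0 = pan_gains(p)           # original channel
glm, grm = pan_gains(-p)          # mult channel, mirrored pan

# With the channels time-aligned, the signals sum coherently in each speaker
left = gl0 + mult_trim * glm
right = gr0 + mult_trim * grm

print(f"original only:  L {db(gl0):+.1f} dB  R {db(gr0):+.1f} dB")
print(f"mult pushed up: L {db(left):+.1f} dB  R {db(right):+.1f} dB")
```

In this sketch, pushing the mult up adds roughly 1.7 dB on the side the guitar was already panned toward and about 3.4 dB on the opposite side, which is the whole appeal: the part comes up everywhere and the image pulls back toward center without a drastic jump anywhere.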
The ability to handle imaging this way is largely system dependent, and I personally prefer a loudspeaker configuration that goes from left to right across the stage. I have to say it was cool hearing Robert share my disdain for the alternating left/right configurations that seem to be popping up in a lot of venues. I’ve worked on those setups and even implemented them myself and have found that they ultimately drive me nuts and tend to cause problems. Not only do they create imaging challenges for music mixing, they often literally screw up audio-for-video FX. I find it extremely distracting to sit in a room watching a video where something flies across the screen and the sound moves in the opposite direction. That isn’t a subtle disconnect for a listener. It sounds like a mistake! Maybe someday I’ll talk about different PA configurations and the challenges that go with them, but I’ll just say this for now: nobody has been happy with things in our rooms until I’ve gotten them configured to where the left side of the room is Left and the right side is Right.
At Guru’s I said that mixing in mono can be difficult especially as the complexity of the band rises, and as you might have gathered from this little dissertation, mono mixing is a big part of sound reinforcement. This was one of my favorite quotes from Bob McCarthy’s article:
If the priority is to make the entire band enjoyable for the whole audience (and I expect it to be this way), then, leave the stereo as a special effect.
Stereo effects can still be cool in a big room, and in some cases might actually be cooler since everything is magnified. However, when it comes time for the music mix, the wide stereo mix doesn’t really translate. We can cheat things a little bit to the sides to make our mixing job feel a little easier and to help reinforce visual expectations, but care must be taken to keep from alienating the listeners on those sides. Even with conservative panning, I still find it extremely important to walk the room if I’m on a system I’m unfamiliar with. At the end of the day, though, success in this is ultimately measured by an audience’s level of engagement. If the band is nailing it and the audience is connecting, the mix is doing its job.
Great insights! One question: Do you hard pan drum overheads, stereo keyboards, Hammond, etc.?
Bill:
In the room, I don’t typically hard pan drum overheads.
With stereo keyboards, it depends a bit on what they’re playing because like some of the “stereo” guitars out there, they aren’t always really stereo. Sometimes it’s a mono thing with a stereo effect on it. A lot of times I will leave them panned out if the musical information is consistent on either side because I don’t want half the room getting just left hand and the other side getting the right hand. It’s case-by-case. The bigger issue I find with keyboards is guys playing too low and getting lost in the guitars.
With our Leslie I’m OK with hard panning it because that panning is more of an effect since each mic gets all the notes.
Great article, Dave! It’s good to know panning’s place in live sound.
Also, I noticed a typo: 4th paragraph, last sentence: “left and ride channels”
Thanks, Tyler. Typo fixed.
Dave,
Thanks for the words, well said. This really hits home the “mixing” part of mixing – instead of just managing levels.