I have been dabbling with film mixes for the past 10 years or so. During this time, I tried and tested a lot of things. Some of the experiments were successful; some didn’t really give me the intended result. I have made a note of most of it on Facebook as notes. This time, I thought I should write about how I approach a surround mix: what the stage means, how I handle the surrounds and the LFE, and so on.
I tend to see a film in two ways (actually more, but this is easier to explain). The story and the scene dictate how I, as an audience member, am in it. Am I part of it, within the landscape, or am I supposed to be a spectator? This defines how I build my stage (the Left, Centre and Right) and the surrounds. The reason behind this is simple. As a mixer, my job is to get the audience to experience the story and be part of it. The visual part is there for me as a reference. The rest of it is an invisible, “experiencing” part. This is where it is important to shift between technicalities and aesthetics.
Let me explain my view of the “stage” first. The stage consists of the Left, Centre and Right channels (the LCR for short). If it is a bigger Atmos installation, or SDDS, it may have Left, Left Centre, Centre, Right Centre and Right (L, Lc, C, Rc, R, i.e. 5 speakers). In a properly calibrated room, all these speakers can handle frequencies from 20 Hz to 20 kHz. It is a pretty powerful section to have. How do I approach the stage part of a mix? Look at it in a realistic sense. If something is happening away from us, and we are merely observing it, that puts us in a “safe zone”. I am not talking technically here; just the feeling that we are observing it from afar is comforting. It’s because we all have a safe space around us, and as long as that is not intruded upon, we are comfortable. This approach also works in storytelling where an incident has happened and the director doesn’t want us, as the audience, to know fully what it is. There is some secrecy in the scene. This approach can also be a good build-up. We’ll get to that later.
Now that we have the stage, another question arises: what happens to the dialogue? Isn’t that always from a single source in the centre? Yes. But there is also a perspective associated with it, and that is what gives it realism. For example, I treat really tight close-up shots with a little more low mids and lows, because those are the chest frequencies: when someone comes really close to you and talks, that is what becomes prominent. This is not something I always do, but I look out for it and will do it if it heightens the scene. It also helps in intense scenes between characters with low-pitched dialogue. Going close and manipulating the EQ like this saves us from raising the levels while maintaining the intensity of the performance.
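To make the “chest frequency” idea concrete, here is a small sketch, purely my own illustration, of what such a treatment boils down to: a standard RBJ (Audio EQ Cookbook) low-shelf biquad lifting everything below a corner frequency by a few dB. The 250 Hz corner and +4 dB gain are hypothetical values for illustration, not settings from any actual mix.

```python
import cmath
import math

def low_shelf_coeffs(fs, f0, gain_db, S=1.0):
    """RBJ (Audio EQ Cookbook) low-shelf biquad coefficients,
    normalised so that a0 == 1."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cw = math.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * cw + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cw)
    b2 = A * ((A + 1) - (A - 1) * cw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cw + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cw)
    a2 = (A + 1) + (A - 1) * cw - 2 * math.sqrt(A) * alpha
    return [b / a0 for b in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def gain_db_at(freq, fs, b, a):
    """Magnitude response of the biquad at one frequency, in dB."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Hypothetical "chest" shelf: +4 dB below ~250 Hz, at a 48 kHz session rate
b, a = low_shelf_coeffs(48000, 250, 4.0)
print(round(gain_db_at(100, 48000, b, a), 1))   # low end is lifted
print(round(gain_db_at(5000, 48000, b, a), 1))  # high mids stay untouched
```

The point of the shelf (rather than a broad boost) is that everything above the corner is left alone, so only the proximity region moves.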
What happens to ambience and other things, then? Don’t they have to be in the surrounds? That is something I came to understand only after many years of premixing. While premixing, I was hearing just the ambience and FX, and so I started making them fill the room in a realistic way: bleeding room tones into the surrounds, panning exterior winds towards the back, and so on. They sound really good when heard that way. But when I heard it with music, something always seemed to pull me out of the scene. I realised why over the years. When we go to the theatre, we sit comfortably and lean back. We expect sounds around us, so our ears get used to this and we settle into that space. But keeping the ambience in such scenes more towards the front, and even eliminating it from the surrounds, makes a big difference, because then we are forced to concentrate on the screen, and having no sound in the surrounds makes us the spectator of that scene, as I described before.
I also have a certain way of handling surrounds. In my first years of mixing, I used to pan things around and place sounds partially in the back and so on. But what I realised was that I was always doing a wall pan. What is wall panning? If you look at the 5.1 or 7.1 panner in Pro Tools, you will see a square with speakers placed at the corners and one at front centre (for 5.1; in 7.1 there are two more in the rear surrounds). All my panning used to run along the outside border; I never used the inside space of the box. It took me a while before I could bring myself to change that. The reason, I realised, was that we depend a lot on the phantom centre, and on the spacing the sound inherently has, to give us dimension. What I didn’t realise was that the phantom centre and that spacing are only effective when reproduced from two distinct sources. So if we have a car pass recorded with a pan from left to right, it creates the image when the left channel is placed in the left pan and the right channel in the right pan. That creates the phantom centre.
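As a sketch of how two sources build that image: a typical equal-power pan law feeds the mono signal to both speakers with gains that keep the total power constant, so a centred source gets about −3 dB in each speaker and the phantom image appears between them. This is an illustration of the general principle, not Pro Tools’ actual pan law, which is configurable per session.

```python
import math

def equal_power_pan(position):
    """Equal-power (constant-power) pan law for a mono source.
    position: -1.0 = hard left, 0.0 = phantom centre, +1.0 = hard right.
    Returns (left_gain, right_gain); L^2 + R^2 == 1 at every position."""
    theta = (position + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

for pos in (-1.0, 0.0, 1.0):
    left, right = equal_power_pan(pos)
    print(f"pos {pos:+.1f}: L {left:.3f}  R {right:.3f}")
```

At position 0.0 both speakers carry an identical copy at about 0.707 (−3 dB), which is exactly the two-distinct-sources condition the phantom centre depends on.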
But there is an issue there. The phantom centre creates the image only if we are in the sweet spot. If we are off-centre, the shift of the sound is immediately apparent. This is the Haas (precedence) effect: identical sounds appear to come from the nearest source, because its sound arrives first. So if you are near the left speaker, you hear the sound predominantly from the left. There is another thing not many have noticed. When the same signal is played from a pair of speakers, there is acoustic crosstalk, which causes comb filtering. This makes the sound of a phantom centre different and coloured compared to an actual centre speaker. In fact, this coloration creates a comb filter with dips around 2 kHz, 6 kHz and 10 kHz, and these are primary frequencies in dialogue and the prominent spectrum. This is why a car pass panned through the discrete speakers sounds more believable than one relying on the phantom centre. So with ambiences and music, not addressing this creates more noise, because the predominant frequencies get dipped. We usually don’t notice it, because we are used to hearing it that way. But it is also why, if you track-lay on headphones and then listen on speakers, the presence and intelligibility change.
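A quick back-of-envelope check of those dip frequencies: if we treat the crosstalk from the far speaker as a single delayed copy of the signal, the comb dips fall at odd multiples of 1/(2·delay). The ~9 cm extra path length around the head is an assumption for illustration, but it lands close to the 2 kHz / 6 kHz / 10 kHz region mentioned above.

```python
# Acoustic crosstalk from a phantom centre, as a single delayed copy:
# summing a signal with a delayed copy of itself produces comb-filter
# dips at odd multiples of 1/(2 * delay).
SPEED_OF_SOUND = 343.0      # m/s at room temperature
extra_path_m = 0.09         # assumed extra path around the head (illustrative)

delay_s = extra_path_m / SPEED_OF_SOUND
dips_hz = [(2 * n + 1) / (2 * delay_s) for n in range(3)]
print([round(f) for f in dips_hz])   # first three dip frequencies in Hz
```

With these assumed numbers the first dip sits just below 2 kHz, with the next two near 5.7 kHz and 9.5 kHz, which is why the coloration eats into exactly the dialogue-critical part of the spectrum.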
Coming back to mixing: this is how I handle the stage and surrounds. I also use the surrounds for impact. There may be instances where I get the audience used to the stage soundscape and then have elements build up in the surrounds as the story moves forward. This moves the audience from being an observer to becoming part of the story. (I got this idea from two mixing approaches called “Stage” and “In the Band”.) This approach, however, has to be handled with care and caution. It is very easy to lose sight of things, because films can run 6-8 reels and you need to hold all of that in your head if this is the intended path. It also definitely requires a discussion with the designer and director to pull off. (In fact, Love, Sex aur Dhoka has 3 stories mixed in 3 separate ways. The first was documentary style and had an LCR mix; the second was security-camera footage and was mixed in mono; the third was from a hidden camera on a shirt and was mixed in 5.1. All in one film!)

Songs I approach differently. (I take a similar approach to background music, but that is too much to talk about now.) I approach songs based on what is on screen: what energy is required, what placement is needed, and so on. I always make sure I have instruments filling the spatial positions without relying on a phantom centre. It can be a bit difficult with percussion, but I rarely leave percussion as a plain left-right mix. (This is a personal approach; there is no hard rule.) I use the centre for as much of the music as I need; for example, it will carry elements of the bass, strings, other instruments and so on. I don’t wall-pan (as I mentioned earlier) things like pads and strings, simply because the elements in a pad are mostly there to fill the space and bring a certain emotion with the chords. I try to listen to the song on headphones if possible, as that gives me a sense of the space the music engineer intends.
He has lived with the song more than I have, and I try to get them to listen to the 5.1 or 7.1 mix to make sure the space translates correctly. This also helps get the balance right faster. But there is a difference I have felt many times. The reason is that music engineers are very used to, and have literally lived with, the music mix in a stereo field. There are two things to note here. One is that the engineer needs to get used to the space and balance. The second is that there may be times when the song, in the flow of the narrative, won’t need the surrounds or the space that the original mix had. Two very prominent examples are the song mixes in Kaminey and in Gangs of Wasseypur. Kaminey had an intense mix for the stereo part. (I did call Farhad to listen to this!) Translating that (songs like Dhan Te Nan and Fatak had a distinct space) needed changes to the EQ and the balancing. The mix was done with elements placed in the space very close to where they sat in the headphone balance. The voice was not just in the centre; divergence was applied to spread it a bit to the left and right. Now, I hate divergence, especially when it comes to ambience and effects, but it imparts a lovely spread to musical instruments. Placing the voice as I mentioned isn’t a rule of thumb, though. In Dhan Te Nan, there were dialogues and effects in between the song. The spread was gradual, timed to a beat or a change so as not to be noticeable, while still getting the energy needed from the song. A completely different approach was taken in Gangs of Wasseypur. There, one song (Moora) was treated so as to lead the character as he moves along the terraces into different rooms. Compared to a conventional song mix this was radically different: the song was moving and was part of the character, even taking on the reverbs of the different rooms it passed through.
It also starts off as a mono song and opens out into 7.1, primarily because I wanted the audience to focus on the character and then be led by the song. The dubstep that is part of the climax of this film is another mix that is completely different from the original. In situations like this, it is a huge help to have the designer, the music director and the director on the same page, understanding the reason behind doing something radical. After all, they have lived with the tracks more than I have, and anything I do like this must be justifiable in context.
My approach to foley is completely different. I use a lot of mono reverbs and try to keep my foley reverbs in the front, as close as possible to the dialogue reverbs. But I don’t use the same reverbs for dialogue and foley. The reason is that they are tonally different, and so a reverb algorithm reacts differently to each. To get them sounding similar, I use similar but not identical reverbs. For example, I like to have less high-frequency content and more low mids in foley reverbs compared to dialogue. That is what makes it believable. Also, my personal experience (I may be wrong) is that even in a large hall, the reverb of dialogue in the real world is felt more in the side and rear reflections than the reverb of the footsteps of the person walking. Getting that balance right isn’t easy, but it isn’t difficult either.