Balancing music for films truly is an art. Why? For one, the mixer must understand and appreciate music. Second, they must understand the relevance of music in storytelling. Third, they must be able to convey the emotion the music composer felt as an artist, so as to enhance the emotional value of a scene.
It is easy to write such things, but more often than not it is a huge challenge, which is why experience plays such a great role here. For the purposes of this blog post, I am writing about the background score and not the songs. The background score, despite its name, has a true artistic value to it. Why?
Up until now, we were dealing with elements found in our life and around us. Real life has no Background score. So, the art of using this and blending it with reality is very subjective. Every person has their own interpretation of the role music should play in a given sequence or film.
This is probably the most important yet most underestimated factor. For some time in our industry, there was a notion that music engineers didn't know about film and vice versa. That was true only to a limit. Let me explain.
The EQ, compression, spatialisation and instrumental importance given to a piece of music heard without the context of dialogue, ambience, effects or sometimes even the visuals is radically different from when these elements are present. As an engineer and an artist, you have to weave yourself in and out of the music constantly, so that it supports the story and the vision the director has. There are instances where the music is the main factor that drives the story forward, as in a time-lapse, but that should not be the primary objective. From my experience, while premixing music I have made it a point not to commit any processing. My way of balancing music is also something I learnt along the way, so it may not be the norm; but as an artist and engineer, I have to justify every move I make against the story.
EQ, compression and the nuts and bolts
I do not EQ or compress the score unless it is utterly needed. This is because, most of the time, the music contains samples or is recorded so well that balancing is all that is needed. This is where I have often received looks like, “What? No plugins for music? Why?” My philosophy is that unlike the real world, where we do need a bit of manipulation to get the intended effect, music is extremely subjective. This is also the reason why it usually makes no sense to play back just the score after a mix: there will be dips and rides. Many times the instrumentation has to be rebalanced to achieve the same emotional experience that was present while scoring.
For example, reducing the level of a string section will drop the body faster than EQ will, so the violas or even the cellos may have to be rebalanced in the mix. Similarly, the music's presence in a close-up shot is radically different from a wide shot: the intensity changes, the pan changes, the distance of the arrangement changes. All this can be done without too much processing.
I also realised that by the time the music score reaches me, it has passed through an engineer who has lived with it far longer than I have. So, if possible, it is important for the music engineer to hear the mix in context. If the emotional balance is right, then the mix is fine.
This has been a topic of discussion for a long time. Many times, I have also heard the reasoning that music engineers are used to a stereo field. That may be true to an extent; I can't mix a stereo track as well as they can. But when it comes to surround, many seem not to realise that there is a center channel present. I don't mean to say that it has to be bombarded, but there are instances where the balance just seems off without it. That is just my personal view and, like I said, music is extremely subjective.
I do like to use the surrounds a lot, but there are certain styles I adhere to. When I mix, it is important to have the chair at a particular height: I want my hands near parallel to the floor, and to sit with a straight back. This is not because of ergonomics. It is because, as I experience the tension and joy onscreen, I tense or relax and change my body posture, and this reflects on my fingers and the positioning of the faders. During a very intense scene, for example, we lean forward, take faster breaths and have constrained movement. As the scene relaxes, our body relaxes and we lean back into the chair, resting our back; as a resulting motion, my hand and fingers gradually bring back the faders, dropping the overall tonality and bringing out subtlety. It may sound wacky, but this truly is a way to bring out scenes.
It is in this context that I want to say how I use surrounds. Surrounds, for me, are decision makers on what I want the audience to feel. Do they want to be washed over, do they want to be focussed, do they want to just follow and go along with the character, is it just a filler, and so on. Surrounds are a natural extension of the space of the music, and they have to be used as a tool for extending the emotional value that the score has. It is not wrong to put percussion in the surrounds. BUT, you have to understand the technicalities and the frequency split that happens.
With a sound format like Atmos, the surrounds are truly full range, but otherwise it isn't so. The first thing to keep in mind is the relationship between the panner plugin in Pro Tools and the theater speaker arrangement. Many new engineers and mixers make this mistake; I did too. The point is this: in Pro Tools the panner shows individual surround speakers, whereas in a theater it is an array. This means that when you pan a sound in Pro Tools (or any surface or DAW) to the back surrounds, in a theater (unless it is a 7.1 mix) the sound is actually going to be played back from the array of speakers spread across the back AND the sides. You have to get your head around this, and it takes some understanding and experimenting.
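To make that concrete, here is a toy sketch of how the same DAW pan position lands on different physical speaker groups in a 7.1 versus a 5.1 room. The channel names and group descriptions are illustrative assumptions, not any real console or Dolby mapping table:

```python
# Hypothetical 7.1 channel names mapped to the speaker groups that
# reproduce them in a 7.1 theater (side and rear arrays are separate).
ARRAYS_71 = {
    "Lss": "left side array",
    "Rss": "right side array",
    "Lsr": "left rear array",
    "Rsr": "right rear array",
}

def physical_array(daw_channel, theater_format):
    """Return the speaker group that actually reproduces a surround pan."""
    if theater_format == "7.1":
        # 7.1 rooms keep side and rear arrays discrete.
        return ARRAYS_71[daw_channel]
    # In a 5.1 theater there is one array per side, spread along the
    # side AND back walls, so side and rear pans collapse together.
    side = "left" if daw_channel.startswith("L") else "right"
    return f"{side} surround array (side + rear wall)"
```

So a "back surround" pan that looks pinpoint on the Pro Tools panner ends up smeared across a whole wall of speakers in a 5.1 room.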
This is why, in one of my earlier blog posts, I mentioned that I really don't prefer wall panning (as I call it). When panning and using the surrounds, make sure there are no coherent waveforms between channels. The reason is that during a fold-down, the surrounds will sum with the front channels and cause a phasing effect.
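The level buildup described above can be checked with a little arithmetic. Below is a minimal sketch, assuming a common -3 dB (0.707) surround fold-down coefficient: a waveform duplicated between a front channel and a surround sums by voltage (about +4.6 dB), while decorrelated material only sums by power (about +1.8 dB), which is why duplicated waveforms jump out of a fold-down:

```python
import math
import random

FS = 48000
SURROUND_GAIN = 0.707  # -3 dB fold-down coefficient (a common assumption)

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# A 1 kHz tone in the front-left channel.
front = [math.sin(2 * math.pi * 1000 * i / FS) for i in range(FS)]

# Case 1: the left surround carries the SAME waveform (coherent).
coherent_mix = [l + SURROUND_GAIN * l for l in front]

# Case 2: the left surround carries a decorrelated signal at equal level.
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(FS)]
scale = rms(front) / rms(noise)
decorrelated_mix = [l + SURROUND_GAIN * n * scale
                    for l, n in zip(front, noise)]

# Coherent material sums by voltage; decorrelated material sums by power.
coherent_gain_db = 20 * math.log10(rms(coherent_mix) / rms(front))       # ~ +4.6 dB
decorrelated_gain_db = 20 * math.log10(rms(decorrelated_mix) / rms(front))  # ~ +1.8 dB
```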
Use of the LFE is something I try to avoid unless it is really necessary. In fact, in the mix of Dibakar Banerjee's Love, Sex and Dhoka, the whole film didn't have an LFE track in the music, and no one complained either! The screen channels carry enough strength for this. That being said, I do use it as an extension to get body from the score, but the LFE for me is more suited to the effects, and should be reserved for them to maximise the dynamics.
The center channel is also something I look at carefully. I avoid mid tones there because that is the dialogue range, and to make rides smoother, I also sometimes notch out a tad bit of the mid range in the center.
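As a rough illustration of that kind of notch, here is a standard peaking-EQ biquad (the well-known RBJ Audio-EQ-Cookbook formulas). The 2 kHz center, -3 dB depth and Q of 1 are purely illustrative values, not a recipe from the mixes discussed here:

```python
import cmath
import math

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ coefficients (b, a), normalized so a0 = 1."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_at(b, a, f, fs):
    """Evaluate |H(e^jw)| of the biquad at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# A gentle, broad -3 dB dip around 2 kHz (illustrative) to leave
# room for dialogue in the center channel.
b, a = peaking_biquad(f0=2000, gain_db=-3.0, q=1.0, fs=48000)
```

The point of such a shallow, wide cut is that the music still reads as full range on its own, but the dialogue no longer fights it, so the fader rides can be smaller.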
I don't prefer to ride music; it just isn't meant to be. What I do is perform fader movements based on the tempo of the score. I usually mix barefoot and tap the tempo of the score as I mix the music against the scene. This helps me get the rides down smoother and also accentuates the musical movements within (they are called movements for a reason!). It even helps to picture yourself as a violinist and mimic the body movement. Riding the music doesn't necessarily have to be done with the fader alone. I have gone to radical lengths in this! For example, during Gangs of Wasseypur, I balanced a whole reel just by riding the wet/dry knob on the background score rather than the fader. My justification is that the music, as a balance and a level, doesn't need to change, but the distance does. This idea came about because of the kinds of shots used: there was intense action in the distance and in close-ups. Don't be afraid to experiment with this, as long as the integrity and intention of the score are not compromised.
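The wet/dry ride can be thought of as a crossfade that moves the apparent distance while the overall level stays put. Here is a toy sketch of the idea, assuming an equal-power crossfade law and a decorrelated reverb return; it is an illustration of the principle, not the actual chain used on the film:

```python
import math
import random

def equal_power_mix(dry, wet, position):
    """Crossfade dry -> wet; position 0.0 = fully dry, 1.0 = fully wet.
    An equal-power law keeps the summed level roughly constant when the
    dry signal and the reverb return are decorrelated, as they usually are."""
    theta = position * math.pi / 2
    g_dry, g_wet = math.cos(theta), math.sin(theta)
    return [g_dry * d + g_wet * w for d, w in zip(dry, wet)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Two decorrelated test signals standing in for the score and its reverb.
random.seed(1)
dry = [random.gauss(0.0, 1.0) for _ in range(48000)]
wet = [random.gauss(0.0, 1.0) for _ in range(48000)]

# Sweep the knob: the level stays near constant while the dry/wet
# ratio (the perceived "distance") moves from close-up to far away.
levels = [rms(equal_power_mix(dry, wet, p)) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
```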
My rides for dialogue happen as we enter a line, or as sudden sharp dips; I often cover such sharp dips with other effects to mask the change. Agneepath was replete with such tactics.
I haven't gotten into the nitty-gritty of what happens with strings against bass, guitars and so on, because that is musical knowledge an engineer should have. Instruments, too, have a conversation amongst themselves in a score; there is a call and response. But understanding the impact of an instrument like an acoustic guitar in a scene where the actor speaks in a low tone is important. It is important to know the range of each instrument and its spread of frequencies across the spectrum in order to achieve a good balance.
I don't prefer to call it a music mix. I prefer to call it a music balance.