During the mix of Agneepath, there were many things I learnt, tried, and experimented with. Most of them were tips I picked up from other engineers and designers; some of them I modified. I want to share these ideas, and I hope this encourages our sound community to share tips, tricks, and techniques. No one needs to reinvent the wheel; we just need to pass on the concepts. I may not be as experienced as most of the veterans here, but I hope there is something useful in the ideas I am sharing.
We ran three Pro Tools machines: one with dialogue, one with effects/Foley/ambience, and one with music. In such a situation, I turned off delay compensation for two main reasons. One was to get more DSP and voices. Two, delay compensation didn't help when multiple machines were in sync, each with a different delay; I corrected the delay manually. I ran the dialogues from a Pro Tools HD 10 machine because I wanted to try the clip gain feature for dialogues, and it helped me a lot. The reason is that clip gain hits the compressor (plugin insert) before the fader, giving me a smoother dialogue to ride. A trick I tried was to ride the dialogues with the compressor bypassed, to smooth out any clicks, pops, and jumps between takes, and then convert all that volume automation into clip gain. This had two advantages. One, the compressor, when put back in, didn't have to work so hard. Two, I got a unity fader position that I could then use to help cut dialogues through the mix via volume if needed. But I quickly saw a problem: if any take was changed, it would become hard to match. (I still took the risk and did it.)

This being a sync-sound film, there were a lot of challenging situations. For example, the dinner sequence in the second half is a quiet scene, but when it was shot, there was a Ganesha visarjan happening just outside the home. That was a very difficult scene to tackle, but thanks to Lochan's finding of takes and sync, we managed to salvage it.
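The clip-gain idea above boils down to signal-chain order: gain applied to the clip lands before the compressor insert, so pre-levelled takes make the compressor work less than uneven takes hitting it directly. Here is a minimal sketch of that reasoning in Python; the toy static compressor, its ratio, and threshold are my own made-up illustration, not Pro Tools internals:

```python
# Toy illustration (not Pro Tools internals): levelling takes with
# "clip gain" BEFORE the compressor insert means the compressor has
# far less work to do than when uneven levels hit it directly.
# Numbers are linear peak values; ratio/threshold are invented.
RATIO = 4.0
THRESHOLD = 0.5

def compress(peak):
    """Static peak compressor: returns (output_level, gain_reduction)."""
    if peak <= THRESHOLD:
        return peak, 0.0
    out = THRESHOLD + (peak - THRESHOLD) / RATIO
    return out, peak - out

# Two takes at very different levels, as in uneven sync-sound dialogue
takes = [0.9, 0.3]

# Without clip gain: the loud take slams the compressor
reductions_raw = [compress(p)[1] for p in takes]

# With clip gain: every take is levelled to at most 0.5 first,
# so the compressor barely works and the fader can sit at unity
reductions_levelled = [compress(min(p, THRESHOLD))[1] for p in takes]

print(max(reductions_raw), max(reductions_levelled))
```

The same arithmetic is why the fader ends up at unity: the level matching has already happened upstream of the insert chain.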
While cleaning dialogues, I make sure that I have some form of ambience or fill running in mono. This stops me from over-cleaning: sometimes the noise in the dialogue blends with the ambience, and you find that you don't need to clean parts at all. If the dialogues can be heard, I try not to clean them at the premix stage, because sometimes, in an effort to make them squeaky clean, artifacts are introduced. Once in the mix, I have a better idea of how much to clean to make them audible and intelligible. Music or ambience covers for the noise in the dialogues most of the time. I use a lot of compression to bring up quieter dialogues, with an EQ post compression so that the cut frequencies don't hit the compressor. It's because of all this that I tend to do my premixes in the box. I use mono reverbs on sync and ADR to match them, and add a secondary room reverb to the combined dialogue so that it feels part of the space. It is this secondary reverb that I spread left-right or lean into the surrounds as needed. I cut frequencies above 7 kHz in the dialogues and make up for the highs by using an exciter. The advantage of this method is that the highs are generated from the remaining dialogue frequencies rather than from the upper noise floor (traffic or hiss). A lot of reverse reverbs and delays were used (thanks to the new reverse parameter in Pro Tools 10). This helps in bringing a rise to certain words for impact. (There is a prominent example in the "naam Vijay Chauhan" scene during the Ganesh visarjan: it was used as a rise into the word "Mandwa", which marks the finale of that dialogue sequence.) It was sometimes used as an effect in combination with normal delays or reverbs to achieve a floating feel. Sometimes I pan the reverbs separately, in contrast to the reverse reverb; sometimes the reverse reverb is kept in the center while the dry signal and reverbs are panned. (It's an experiment. What worked for the scene, I kept!)
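The 7 kHz cut plus exciter idea can be sketched as a two-stage chain: low-pass the dialogue, then derive new high content by distorting what remains, so the "air" is rebuilt from the voice itself rather than from hiss or traffic. The one-pole filter and tanh nonlinearity below are my own crude stand-ins for a real EQ and exciter, purely to show the principle:

```python
import math

SR = 48000  # sample rate assumed for this sketch

def one_pole_lowpass(signal, cutoff_hz, sr=SR):
    """Crude one-pole low-pass; a stand-in for a proper EQ band."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def excite(signal, amount=0.1):
    """Cut above ~7 kHz, then regenerate highs by soft-saturating the
    remaining content: the new harmonics come from the voice band,
    not from the upper noise floor that was filtered away."""
    lowpassed = one_pole_lowpass(signal, 7000.0)
    harmonics = [math.tanh(3.0 * x) for x in lowpassed]  # nonlinearity adds upper harmonics
    return [l + amount * h for l, h in zip(lowpassed, harmonics)]

# Demo on a short 1 kHz burst standing in for dialogue
dialogue = [math.sin(2 * math.pi * 1000 * n / SR) for n in range(480)]
treated = excite(dialogue)
print(len(treated))
```

A real exciter would band-limit the saturated signal before mixing it back; the point here is only that the added highs are harmonics of the kept material.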
For effects, Sharon was handling the premixes. I made it a point to use as much mono reverb for Foley as possible (a trick I learnt from Hari Dwarak). This helps in two ways: one, it maintains perspective in a busy mix, and two, it sits with the production tracks. I also cut frequencies above 7 kHz in the Foley to eliminate unwanted noise and to make it a little warmer. (I sometimes use a little tube saturation for that.) I bring up the bass in the Foley a little more than needed, but in the end, within the mix, it gives the Foley the body it needs. Whatever booming arises can be controlled with a multiband compressor. I tend not to low-cut Foley, as that sometimes messes with the sync tone when they play together; it also adds back the body that I end up cutting from the production track. Sharon had a track of the production dialogue. It helps to catch cases where the Foley pans while the dialogue stays in the center, since we have limited scope for panning production dialogues. In effects, we used separate processing for the LFE. We had a regular send and a process send: things like gunshots or punches were sent to an external processor and then filtered for the LFE. This ensures there are no identical waveforms in the LFE channel, avoiding the chance of phasing. I also used a sample delay of around 12 to 15 samples on the LFE master. This helps distinguish the main LCR low frequencies from the LFE channel, giving more depth in perception than is actually there. Sometimes I automate the compressor on the LFE and use negative compression on tracks (a tip I figured out from Farhad's post and modified). I had a lot of ambience in the center channel too, but to prevent it from sounding like a diverged ambience, I had a separate EQ on the board for the center channel of the ambience. This prevents phasing in fold-down and also gives a larger spread and space to the ambience.
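The LFE sample-delay trick is easy to show in code: shifting the LFE feed by 12 to 15 samples means its waveform is no longer sample-identical to the LCR lows, so fold-down is less prone to comb-filter-style cancellation. A minimal sketch, where the delay helper and the crude smoothing filter are hypothetical illustrations rather than any real plugin:

```python
LFE_DELAY_SAMPLES = 12  # within the 12-15 sample range mentioned above

def sample_delay(signal, n=LFE_DELAY_SAMPLES):
    """Delay a channel by n samples, padding the start with silence."""
    return [0.0] * n + signal[:len(signal) - n]

def lowpass_crude(signal, alpha=0.9):
    """Very crude smoothing filter standing in for the LFE low-pass."""
    out, y = [], 0.0
    for x in signal:
        y = alpha * y + (1.0 - alpha) * x
        out.append(y)
    return out

# A punch/gunshot-style transient in the main LCR channels
main_lcr = [1.0, 0.5, 0.25] + [0.0] * 97

# The LFE feed is separately filtered, then nudged later in time,
# so it never lines up sample-for-sample with the LCR lows
lfe_feed = sample_delay(lowpass_crude(main_lcr))

print(lfe_feed[:3])  # starts with silence: no sample-identical overlap
```

The same decorrelation logic is behind routing gunshots through an external processor first: the LFE content is related to, but never a copy of, the main channels.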
It's easier for Sharon to just place the ambience in the front, and the EQ takes care of coherence. To make things a little larger than life, I expanded the sounds into the surrounds and used a tight reverb. This gives them some stay to maintain the big tone, and of course LFE! (A lot of the LFE was inspired by Leslie Fernendes' mix. Thanks!!) The ambience in general had a slight reverb in the surrounds, just to make it different from the front element. The big elements (guns and punches) were split into high-, mid-, and low-frequency elements by Stephen and his team. This helped in deciding what to give prominence, based on what kind of music or dialogue was playing at that point.
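Splitting the big elements into high, mid, and low bands can be sketched with a simple crossover: take the low band from a low-pass, the high band as the residue above a higher cutoff, and the mid as what is left, so the three bands always sum back to the original. The cutoffs here (200 Hz / 2 kHz) and the one-pole filters are illustrative guesses, not what was actually used on the film:

```python
import math

SR = 48000

def one_pole_lowpass(signal, cutoff_hz, sr=SR):
    """Crude one-pole low-pass used as a toy crossover filter."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def three_band_split(signal, low_cut=200.0, high_cut=2000.0):
    """Split into (low, mid, high) so the three bands sum back exactly."""
    low = one_pole_lowpass(signal, low_cut)
    below_high = one_pole_lowpass(signal, high_cut)
    high = [s - b for s, b in zip(signal, below_high)]
    mid = [b - l for b, l in zip(below_high, low)]
    return low, mid, high

# A punch-like element: 60 Hz thump plus a 4 kHz crack
punch = [math.sin(2 * math.pi * 60 * n / SR)
         + 0.3 * math.sin(2 * math.pi * 4000 * n / SR)
         for n in range(960)]
low, mid, high = three_band_split(punch)
recombined = [l + m + h for l, m, h in zip(low, mid, high)]
```

Because the split is complementary, any band can be pushed or pulled against the music and dialogue without losing the ability to reconstruct the whole hit.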
For the music, Rajiv was handling the premixes. We had a lot of rhythms in the front and surrounds, but the surrounds were a tad wetter (around 5-8 percent) than the front. This was to distinguish the signal coherence while maintaining the timbre and making it bigger. All big drums, huge hits, etc. were treated this way. Strings were treated differently: the melody was more frontal, while the pads and lush fill instruments were placed towards the surrounds. This added to the grandeur of the score (a massive one by Ajay-Atul). I also used the stereo-to-surround converter from Waves for pads, piano, and choir. This maintained the inherent dimension of the stereo tracks in the surrounds and helped make them larger. The LFE treatment was similar to what was used for the effects. When we mixed the music, we had a lot of elements in the center channel too, especially ones like bass guitar and kick. Instruments that clash with dialogue frequencies were generally kept away, but sometimes, to maintain fluidity in the mix, we used elements in spite of the clash with dialogues. These were taken care of with a midrange EQ dip applied while running the music against dialogue in the mix. This saved me from too many rides on the music while still letting the dialogues cut through. A lot of short reverbs were used on the percussive elements to make them stay. (A huge thanks to Hari Dwarak for this.)
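The "surrounds a tad wetter" move is just a wet/dry mix where the surround feed carries roughly 5-8 percent reverb and the front stays dry, so the timbre matches but the coherence differs. A sketch of that routing; the comb-style tail below is purely illustrative, not any real reverb:

```python
def toy_reverb(signal, delay=400, feedback=0.5, taps=4):
    """Extremely simplified comb-style reverb tail, for illustration only."""
    out = list(signal)
    for t in range(1, taps + 1):
        gain = feedback ** t
        offset = delay * t
        for i in range(len(signal)):
            j = i + offset
            if j < len(out):
                out[j] += gain * signal[i]
    return out

def surround_feed(signal, wet=0.06):
    """Front stays dry; surrounds mix in ~5-8 percent reverb (here 6%)."""
    wet_sig = toy_reverb(signal)
    return [(1.0 - wet) * d + wet * w for d, w in zip(signal, wet_sig)]

# A big drum hit: identical source to front and surrounds,
# but the surround copy carries a faint tail
drum_hit = [1.0] + [0.0] * 1999
front = drum_hit                  # dry
surrounds = surround_feed(drum_hit)
```

The initial transient stays essentially identical front and back (same timbre), while the surrounds gain a short tail that decorrelates them from the front and makes the hit feel bigger.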
The mix was done on the Neve DFC Gemini. Everything arrived as predubs or stems on the faders. I tried to keep the rides minimal while maintaining the flow through EQ and compression; any major rides were purely for creative purposes (creating silences or fading in music pieces based on composition and scene). An idea I got from Parikshit and Kunal is that loudness is always dictated by the scene and not the meter. While going loud, I made sure to avoid distortion and square waves, and I had a few methods. One was to create sudden apparent loudness in the surrounds by varying their bandwidth with EQ before, during, and after the loud scene. Another was compressing the hell out of it and increasing the mids a tad (1.5-3 dB). Yet another was to increase levels while cutting mids (except dialogues), so that it doesn't hurt the ear but the lower frequencies carry the bigness. I tried to maintain a graph that helps increase apparent loudness when needed.
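Those 1.5-3 dB moves correspond to small linear gains, which is why they read as "a tad". A quick sketch of the dB-to-linear arithmetic behind "compress the hell out of it and increase the mids"; the hard-limit stage and broadband lift are my own stand-ins, since no real band split or compressor settings are given in the text:

```python
import math

def db_to_linear(db):
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

def squash_and_lift(signal, boost_db=2.0, ceiling=0.9):
    """'Compress the hell out of it and increase mids a tad': hard-limit
    peaks, then apply a small lift standing in for the 1.5-3 dB mid
    boost (no real band split in this toy)."""
    g = db_to_linear(boost_db)
    limited = [max(-ceiling, min(ceiling, x)) for x in signal]
    return [g * x for x in limited]

# 1.5 dB is roughly a 1.19x lift, 3 dB roughly 1.41x
print(round(db_to_linear(1.5), 3), round(db_to_linear(3.0), 3))
```

The takeaway is that apparent loudness comes from density and spectral balance, not the meter: the limiter holds the peaks while the small post-lift raises the average.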
There is not much new in what I have written, but we need to start sharing. In the end, it's talent and not just technique that brings out a mix. Practice, patience, and experience bring out a lot, and I know I have a long way to go. But if it were not for those who shared before me, I wouldn't have got these ideas in the first place.
Thanks for this great summary.
But what do you mean by “……I tend to do my premixes in the box.”
You said this while talking about how you handle your dialogue editing.
Hi. It means I do my premixes within Pro Tools itself usually.
Madhab das said:
THANKS BHAIYA FOR THE SHARING …