This post took a while to arrive. It is a difficult section to address without getting really complicated, so it took some thinking, distilling and compressing to get things into this form.
This is a section that requires a different kind of sensibility to music. The reason is that FX are very real-world sounding (unless it is a design element or something purposely different). It is necessary to gain a sense of how real-world reverbs sound, how foley and other effects sound, how ambience surrounds us, how we notice something when it is on the stage versus when we are engulfed by it, and so on. The list can be endless. Don't get me wrong: music engineers can be good with effects, but this is always an acquired talent, because from a very young age we are used to hearing music through speakers, whereas we simply take other sounds for granted. Apart from that, it is also important to know which sounds can be prominent at a given time.

That being said, I have synesthesia, a condition where sounds translate to colour for me. That has been the case from a very young age, and it used to be quite disturbing at times: car horns can be anywhere from grey to red, birds can be blue in sound. A crow's caw, believe it or not, is steel-grey in my head! I do not know why, but the way it affects me in a film context is that the colour tone of the film greatly influences my EQ and the tonality with which I deal with it. This is probably why I am unable to mix music pieces without a visual reference. It also helps me handle the balance of music and effects when it comes to the final mix. More on that in a later post.
We are blessed with selective listening. Otherwise we would end up hearing everything that produces sound and would not be able to concentrate on anything. For example, you can always follow a conversation in a bar or within a crowd even if the surroundings are really loud or noisy. This is a principle that should be practiced deliberately while handling effects. For a basic understanding, the world of effects can be classified into hard effects, foley and ambience. There are also design, non-diegetic effects and so on, which I will try to touch upon.
Hard effects are the ones that are prominently present on screen: car passes, doors, punches and so on. It is important to understand that these should be panned according to their real position. This means, as I mentioned in my earlier blog post, that if you have a stereo recording of a car travelling left to right, don't just assign the tracks to the left and right speakers and let the phantom center do the job. That is a huge NO. Also, don't try to cheat by placing the sound midway between left-center and right-center. The extremes won't sound good, and the sound will end up turning wide for what should be a point source.
Always pan across the stage when needed; the sound has to pass through the center channel. That is what makes the listening realistic even when the listener is not seated in the ideal center of the theater. For accurate pans, if you are premixing in the box, you can use half-speed playback in Pro Tools for automation: pressing Shift+Spacebar plays the session at half speed, giving you more time to pan a sound accurately. There are various techniques of automation snapshots, cuts and so on to achieve a pan, but I sometimes find it much more natural to ride the pan by hand.
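The idea of panning through the center channel, rather than letting a phantom center do the work, can be sketched as a gain hand-off between adjacent speakers. This is a minimal illustration, not any mixer's actual pan law; the function name and the constant-power choice are my own assumptions.

```python
import math

def pan_lcr(pos):
    """Gains for a mono source across Left, Center, Right speakers.

    pos: -1.0 (hard left) through 0.0 (center) to +1.0 (hard right).
    Constant-power panning between adjacent speakers, so a moving
    source actually passes through the center channel instead of
    relying on a phantom center formed by L and R alone.
    Returns (gain_L, gain_C, gain_R).
    """
    if pos <= 0.0:
        f = (pos + 1.0) * math.pi / 2.0   # 0 at hard left, pi/2 at center
        return (math.cos(f), math.sin(f), 0.0)
    f = pos * math.pi / 2.0               # 0 at center, pi/2 at hard right
    return (0.0, math.cos(f), math.sin(f))

# A left-to-right car pass: gains hand off smoothly L -> C -> R.
for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, c, r = pan_lcr(p)
    print(f"pos {p:+.1f}: L={l:.2f} C={c:.2f} R={r:.2f}")
```

Note how the center gain peaks as the source crosses the middle of the stage; a phantom-center pan would leave the C channel silent throughout.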
Design is a class in itself, because it comprises sounds that are created and don't necessarily exist in the real world, or sounds built from common ones for the purpose of evoking an emotion. That being said, pans, levels, EQ, compression and reverbs are what define this. The interpretation of such a scene is very important to executing such a moment.
I always go with my first intuition about the tracks and build my interpretation on that. It is very, very important to understand what sounds are in the track and why they are placed. Always respect the sound editor! Are some of the sounds options? Are they all supposed to blend well? Do they work well with the music? Are they clashing with any dialogue? These are some of the points that have to be thought through in deciding how to balance this.
I do not try to make a clinical balance of such scenes, or rather of any scene. The reason is that it sounds shitty. Yes, don't get me wrong, but as human beings we are used to minor imperfections. That's why most of us love an analogue tone rather than a completely digital one. That's also one of the reasons I don't like to overly sharpen my tracks; it takes me out of the scene. That being said, when you make a decision and a balance, you have to be able to justify why it was done. For example, around 10 years back it was almost a norm in India that punches and guns had to be mixed dead center. I wasn't a huge fan of that. Let's look at it: it depends on the kind of film. The Bollywood films I worked on at that time, and for the majority of my career, are hyper-realistic films; things are always larger than life. Why, in that case, should these sounds be treated realistically when the audience is already in suspended disbelief? I started panning punches and gunshots a little wider than the norm, still giving around 80% of the weight to the center channel. I did have my share of mistakes and errors: the technique sounds big without being loud, but if used constantly it ruins the contrast and the dimension of a sequence.
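That roughly-80%-to-center weighting can be pictured as a power split: keep most of the energy in C and spread the remainder to L and R. This is only a sketch of the idea under my own assumptions (the function name, the power normalisation and the `side_bias` parameter are all hypothetical, not the author's workflow).

```python
import math

def widen_from_center(center_share=0.8, side_bias=0.0):
    """Gains for a punch or gunshot kept mostly in the center channel.

    center_share: fraction of total power kept in C (~0.8 here,
    matching the roughly-80% weighting described above).
    side_bias: -1.0..+1.0, shifts the remaining power toward L or R.
    Returns (gain_L, gain_C, gain_R), normalised to constant power.
    """
    side_power = 1.0 - center_share
    left_power = side_power * (1.0 - side_bias) / 2.0
    right_power = side_power * (1.0 + side_bias) / 2.0
    return (math.sqrt(left_power),
            math.sqrt(center_share),
            math.sqrt(right_power))

l, c, r = widen_from_center()
print(f"L={l:.3f} C={c:.3f} R={r:.3f}")  # C dominates, L/R add width
```

The point of the split is exactly the contrast argument above: the image gets bigger without the fader going up, which is why it loses its effect if every hit gets the same treatment.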
Ambience is my favourite section. I have a weakness for such tones, as they tend to be very realistic and instantly put you in the space of the movie. That being said, it is also the most underestimated element in film sound, because music tends to cover it. But we'll get to that later.
I almost never use divergence in ambience, because to my mind it feels like a cheat in a realistic space. Ambience is never a big mono. This is also why I usually don't prefer sound recorded in 5.1 (unless there is really good spacing among the microphones). The space that can be created by layering multiple stereo elements has a deeper immersive effect than a plain surround recording, and it also gives us a lot of options to manipulate the space for emotional reasons within the storytelling process.
Personally I use more mono reverbs in ambience than stereo reverbs. This may sound counterintuitive, but placing a mono reverb in the space with a slight offset from the position of the dry signal (like a bird) can be surprising and subtle. Ambience is always subtle; it is never overplayed or underplayed, and I hardly ever ride it. What I do constantly with ambience, compared to effects, music or dialogue, is change its spatial width. It is easy to manipulate the perceived size of a place by changing pans in the ambience, and easy to move the audience between states by removing elements from it. For example, if a scene goes from a normal conversation to a really intense one without a change in delivery or pitch, you can heighten the impact by removing the natural elements like birds and wind and remaining with the man-made ones like a fan or cars. Emotionally we are more connected to natural sounds, which put us in a comfortable space, while man-made sounds push each of us into a personal space that can sometimes be disorienting.
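One common way to change the spatial width of a stereo ambience pair is a mid/side rescale: collapse toward mono or push wider without touching levels. A minimal sketch, assuming plain sample lists; the function name is mine and this is only one of several ways to do what the paragraph describes.

```python
def set_stereo_width(left, right, width):
    """Rescale the stereo width of an ambience pair via mid/side.

    left, right: lists of samples.
    width: 0.0 collapses the pair to mono, 1.0 leaves the image
    untouched, values above 1.0 widen it.
    Returns (new_left, new_right).
    """
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0           # what both channels share
        side = (l - r) / 2.0 * width  # what separates them, rescaled
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Riding `width` over a scene is one way to make a room feel like it grows or closes in while the ambience itself keeps playing, which fits the idea of manipulating the space rather than the level.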
There are no major tricks that I use here, except that I don't over-EQ or compress, and I try to have fewer high-frequency elements and more body. A main factor I look out for is keeping the center channel with constant ambience, for continuity and a natural flow. A kind of glue, if you will.
Mixing foley can be really difficult if you are dealing with tracks in the wrong tonality. Foley that is sharp with very little body is much more difficult to manage than properly recorded foley. The only way to get an idea of how it should sound is to observe foley as you move around in daily life. Notice, for example, that not all footsteps are heard at the same level, and not all shuffles are heard at all. Some are noticed for attention and some can be used as effective misdirection. All of these are key when balancing foley.

As a rule, I always make sure I listen to just the foley with the dialogue, because this is the first element that makes things feel real. Ambiences are there to manipulate the space; everything else has to glue well. Foley and dialogue pans especially have to match: you can't have a foley footstep move from left to right while the dialogue remains in the center. Little things like this have to be taken care of, and they can be noticed while running these elements together.

It also gives a fair idea of the reverbs and how they match. In my experience, the reverbs used for dialogue usually don't directly fit foley; some tweaking and tonal change is needed, because unlike dialogue, foley has a lot of transient sounds, and they react differently to reverbs. Sometimes I send a compressed foley mix to a foley reverb. That makes it possible to increase the wetness as needed without having the transients cause a reverb spill. This takes some listening before deciding what fits foley the right way.
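The compressed-send idea above can be sketched very simply: tame the peaks of the foley before it reaches the reverb, so the send level can come up without transients spiking the tail. This is a bare-bones static compressor for illustration only (no attack/release envelope, and the threshold and ratio values are my own assumptions).

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Very simple static compressor for a reverb send.

    Peaks above `threshold` are reduced by `ratio`; quiet material
    passes untouched. Just the idea of taming foley transients
    before they hit the reverb, not a production compressor.
    """
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# Feed the compressed foley (not the raw track) to the reverb send:
foley = [0.1, 0.9, -0.95, 0.2, 0.05]  # a sharp transient in the middle
reverb_send = compress(foley)          # peaks tamed, wetness can go up
```

With the peaks held back, the reverb return stays even when a footstep or a prop hit lands, which is exactly the "spill" the paragraph describes avoiding.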
I usually cut sounds above 7–8 kHz. Frankly, anything recorded as foley above that usually contains noise. One place to take care is subtle shuffle sounds that may get lost if this cut stays. Footsteps, hand movements and taps all get this cut. It also helps the reverb sit in a more natural way.
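For illustration, here is a one-pole low-pass doing a gentle high cut around 7.5 kHz. A real mix would use a proper EQ with a steeper slope; this is just the idea of the cut in runnable form, with the cutoff and sample rate as assumed values.

```python
import math

def high_cut(samples, cutoff_hz=7500.0, sample_rate=48000.0):
    """One-pole low-pass as a stand-in for the 7-8 kHz high cut.

    A gentle 6 dB/octave roll-off: content well below the cutoff
    passes almost untouched, while the hissy top end (where foley
    recordings mostly carry noise) is attenuated.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)      # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)  # first-order recursive smoothing
        out.append(y)
    return out
```

Because the slope is gentle, a filter like this leaves the body of footsteps and taps alone, which is also why the subtle-shuffle caveat above matters: their energy sits close enough to the cut to suffer.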
Once the foley is done, I listen to it against the dialogue to quickly identify any mix errors like pans or level jumps. Once that is done, I introduce the ambience and effects. At this stage I want to get a feel of the real sounds in the scene before delving into the music. It also helps me decide later, during the final mix, how much of a role music needs to play in the scene: where it can enter, where to leave, where to keep it anticipatory (yes, you can achieve that with levels!) and so on. I will try to cover this in a blog on the final mix later.
Hope it's useful! Do let me know if there is something of interest and I can try to cover it in a post sometime.
Just wanted to let you know this: your blog is a great read and I love that you share so many insights on your practice. As a sound editor and designer, it's always good to hear how mixers approach your material.
Thanks a lot Arnoud!