Spectral Management
A few months ago, I announced I was cutting back on blogging to record a new Color Theory album. Instead, I was hired to mix three others (Exhibition by Die Brücke, The Deadliest Fairy Tales by Rain Rain, and a yet-to-be-named album for 907Britt). Since I’ve been living and breathing mixing since June, I thought I’d give my ears a rest and share my thoughts on spectral management.
Spectral management sounds like something you’d hire a firm to do, but it simply means finding a place for each instrument in the frequency spectrum. In my last mixing article, I described how to tighten the low end of the mix using a frequency analyzer. When the competing rumble and mud are removed, you’re left with tight and punchy bass. The same philosophy applies to the rest of the mix.
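The low-end tightening step can be sketched in a few lines of Python with SciPy. This is purely illustrative, not the article's actual workflow (which uses a frequency analyzer and plugins, not code); the 100 Hz cutoff and the band choices are my own placeholder values:

```python
import numpy as np
from scipy.signal import butter, sosfilt, welch

def low_end_report(signal, fs=44100):
    """Rough per-band energy below 300 Hz, as a frequency analyzer would show."""
    freqs, psd = welch(signal, fs=fs, nperseg=4096)
    low = freqs < 300
    return dict(zip(np.round(freqs[low]).astype(int), psd[low]))

def tighten(signal, cutoff=100, fs=44100, order=4):
    """High-pass a non-bass track so it stops competing with kick and bass."""
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, signal)
```

After high-passing the guitars, pads, and pianos, the report on each track should show the rumble gone while the kick and bass keep the region to themselves.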
Taking the concept to its logical extreme, you could carve out a discrete frequency range for each instrument using sharp low and high cut filters, like so:
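As a sketch of that extreme, the brick-wall version could look like this in SciPy. The frequency slots below are hypothetical, invented for illustration rather than taken from the article:

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Hypothetical "one slot per instrument" map in Hz -- an illustration of the
# extreme approach the article warns against, not a recommendation.
BANDS = {
    "kick":    (40, 120),
    "bass":    (120, 300),
    "guitar":  (300, 1500),
    "vocal":   (1500, 5000),
    "cymbals": (5000, 16000),
}

def carve(signal, lo, hi, fs=44100, order=8):
    """Steep band-pass: sharp low and high cuts around one instrument's slot."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)
```

Run every track through `carve` with its slot from `BANDS` and the tracks will stack without overlapping, which is exactly why each one sounds so wrong on its own.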
While it looks good on paper, it’s a bit heavy-handed in practice. It might sound okay when all the instruments are in, but removing any piece of the puzzle leaves a noticeable hole. Forget about soloing anything, because every track sounds terrible in isolation. That’s because most instruments have energy spread across the entire spectrum. Focusing on a narrow band removes the fundamental frequencies below it and the overtones above it, altering the timbre of the instrument to the point where it becomes hard to tell a violin from a trumpet.
Instead, I try to emphasize each instrument’s basic character. I’ll bring out the 80 Hz thump of the kick, emphasize the 300 Hz meat in the guitars, find a sweet spot in the 1-2 kHz range on the snare, add some 3 kHz presence to the vocal (more on that below), and maybe a dash of 15 kHz sizzle to the overheads. My simplified spectral map looks like this:
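That kind of emphasis can be approximated with standard peaking filters (the Robert Bristow-Johnson audio EQ cookbook design). The center frequencies below come from the article; the boost amounts and Q are my own placeholder values, and of course the real mixes use hardware-modeled plugins, not code:

```python
import numpy as np
from scipy.signal import sosfilt

def peaking_sos(f0, gain_db, q=1.0, fs=44100):
    """RBJ cookbook peaking EQ as one second-order section for sosfilt."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return (np.concatenate([b, a]) / a[0]).reshape(1, 6)

# Emphasis points from the article; gains in dB are illustrative guesses.
EMPHASIS = {
    "kick":      (80, 3.0),
    "guitar":    (300, 3.0),
    "snare":     (1500, 3.0),   # somewhere in the 1-2 kHz sweet spot
    "vocal":     (3300, 2.0),
    "overheads": (15000, 2.0),
}

def emphasize(track, name, fs=44100):
    f0, gain_db = EMPHASIS[name]
    return sosfilt(peaking_sos(f0, gain_db, fs=fs), track)
```

A peaking filter's gain at its center frequency equals the requested dB value, so each instrument gets a gentle bump right at its characteristic spot while the rest of its spectrum is left alone.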
The only non-bass instrument I make room for is the vocal. I set it off from the rest of the mix using complementary EQ. Typically I’ll use the Neve 1081 on my UAD-2 Quad to boost the vocal a couple of dB at 3.3 kHz, and dig a hole out of competing instruments at the same frequency, by the same amount, at the same Q. The boost and cut are mirror images of each other.
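The mirror-image relationship is easy to demonstrate in code: a peaking cut at the same frequency, amount, and Q is the exact inverse of the boost, so cascading the two cancels out completely. A SciPy sketch (the 2 dB amount and Q of 1.0 are illustrative guesses, and the biquad here stands in for the Neve 1081 model):

```python
import numpy as np
from scipy.signal import sosfilt

def peaking_sos(f0, gain_db, q=1.0, fs=44100):
    """RBJ peaking filter; positive gain_db is the boost, negative digs the hole."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return (np.concatenate([b, a]) / a[0]).reshape(1, 6)

FS = 44100
F0, GAIN, Q = 3300, 2.0, 1.0   # amount and Q are placeholder values

boost = peaking_sos(F0, +GAIN, Q, FS)   # on the vocal
cut = peaking_sos(F0, -GAIN, Q, FS)     # on the competing guitar, pad, or piano

# Because the cut's numerator is the boost's denominator and vice versa,
# running any signal through both filters returns it essentially unchanged.
```

In the mix the boost and cut land on different tracks, of course; the point of the demo is just that the two curves sum to flat, so the overall tonal balance survives while the vocal claims the 3.3 kHz territory.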
The bass and drums usually don’t compete with the vocal, but I always dig the hole out of a guitar, pad, or piano. Backing vocals are a tough call. Sometimes I’ll dig the hole to differentiate them from the lead, and other times I’ll give them the boost to allow them to be heard in a dense mix. Don’t forget the reverb returns! Digging the hole out of the vocal reverb gives the vocals added clarity and immediacy, even after turning up the send to compensate for the loss in level.
Lead and solo instruments that play when the vocal isn’t in also get the boost. I emphasize whatever is at center stage at any given moment.
Sometimes I’ll use complementary EQ on a second instrument (usually piano or guitar), but going beyond that yields diminishing returns and transforms the musical journey into a science project. At some point you just have to turn the knobs until it sounds good.
For a more detailed technical discussion of spectral management, check out this article by audio legend Dave Moulton.
Brian Hazard is a recording artist with fifteen years of experience promoting his seven Color Theory albums. His Passive Promotion blog emphasizes “set it and forget it” methods of music promotion. Brian is also the head mastering engineer and owner of Resonance Mastering in Huntington Beach, California.
Reader Comments (3)
Good post. "Sculpting sounds" with EQ can be a very powerful tool, but it worries me when people use it "by default". I also use it selectively, but I'm cautious about your emphasis on 3.3 kHz - that's often a very nasty frequency, in my opinion. It depends crucially on whose voice you're applying the EQ to...
Here's a great tool for people trying to get their heads round this kind of thing:
Interactive Instrument Frequency Chart
and here's a blog post I wrote along similar lines.
7 crucial EQ bands to help balance your mix
Ian
My goal is to get away from EQ as much as possible, but the fact is that thick arrangements or lots of tonal overlap will probably see you using EQ. In electronic music, where there can be plenty of instruments, drastic frequency skew (basically the spectrum leans toward the "artificially enhanced" side, which is appropriate in this case), and lots of harmonics, I'm not afraid to get down and dirty. I wasn't handed a recording with instruments that already dictate their own bandwidth, like a 4-piece, orchestra, or ensemble, so I don't feel bad if my settings appear extreme.
Still, the last vocal I had was devoid of an EQ insert. Only compression. That was it.
Ian, you are right that 3.3 kHz can get nasty, depending on the singer. The ear is particularly sensitive to that range, which is why I put the most important element of the mix there. My current mixing project has a female singer, and I've lowered the hole/boost frequency slightly to 2.8 kHz (using the UAD Harrison 32C). Thanks for linking to your clear and well-written article!
Synonym, I used to share that same philosophy about EQ. I preached that any "EQ problem" could be fixed by altering the arrangement. Now I use an EQ and compressor on pretty much every channel. I find that I'm more heavy-handed with paid projects, because I want to get them done quickly and if possible, under budget. With my own stuff, I spend forever and a day finessing each part into place, using EQ and compression as sparingly as I can get away with. But if I'm honest with myself, listening back, the paid projects sound better. :)