
Garnish Music Production School

Vocal Processing

Microphone Modelling

Different mics have different sonic “signatures.” Much of this comes down to the microphone’s distinctive frequency response, so mic modelling software or hardware analyses a reference microphone’s response (along with other selected characteristics) and applies this signature to your mic. The process works best when the modelling software has analysed your mic as well – so it knows exactly what compensation to apply – or when you use a mic that is recommended as a signal source for the modelling device.
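As a rough illustration of the idea (not any vendor’s actual algorithm), modelling can be reduced to a frequency-response “correction curve”: divide the target mic’s measured response by the source mic’s, then filter the recorded signal with the result. The response arrays and values below are invented toy data.

```python
import numpy as np

def correction_filter(source_resp, target_resp, eps=1e-6):
    """Magnitude curve to apply to a signal recorded on the source mic
    so its spectrum leans toward the target mic's signature."""
    return target_resp / (source_resp + eps)

def apply_model(signal, source_resp, target_resp):
    spectrum = np.fft.rfft(signal)
    # Interpolate the measured curves onto the FFT bin grid
    bins = np.linspace(0.0, 1.0, len(spectrum))
    grid = np.linspace(0.0, 1.0, len(source_resp))
    src = np.interp(bins, grid, source_resp)
    tgt = np.interp(bins, grid, target_resp)
    return np.fft.irfft(spectrum * correction_filter(src, tgt), n=len(signal))

# Toy responses: the "source" mic rolls off its highs, the "target" is flat,
# so the correction boosts the top end of the recorded signal.
source = np.linspace(1.0, 0.5, 64)
target = np.ones(64)
out = apply_model(np.random.randn(48000), source, target)
```

This is why the process works best when your own mic has been analysed: without an accurate `source_resp`, the division compensates for the wrong curve.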

So can you really turn a Radio Shack mic into a vintage tube Neumann? No way. Granted, it may sound more like a Neumann than it did before, but no one’s going to prefer it to the real thing. However, with a good source mic, and if you don’t stretch the model too far (for example, having one dynamic mic sound like a different dynamic mic will probably work out better than trying to make it sound like a small diaphragm condenser type), mic modelling can be a very useful tool.

The issues are the same with modelled guitar amps. Clearly, plugging a guitar directly into a board through a modelling preamp is not going to feel the same as playing through a guitar amp and cabinet. That’s not surprising, but what is surprising is just how close you can come, and how by the time the track plays back, few people can hear the difference between the real thing and the simulation.

Mic modelling isn’t a replacement for a good collection of mics, but it can take a good collection of mics further. Even if simulating other mics isn’t your main interest, the complex response curves created by applying mic modelling have uses in their own right.

I have also used this plug-in to compensate when the vocalist moves in/out of the mic’s sweet spot. Using the proximity parameters, I have sometimes found this effect useful for improving a bad recording, even if ever so slightly.

Pops

What we term ‘pops’ in vocal recordings are produced by the plosive consonants P and B when sung or spoken into a microphone. Although you usually deal with them at the recording stage (via a pop shield, high-pass filter, etc.), there are times when that’s not enough. Here again I use a plug-in to process the file directly, using the audio editor to select the desired portion more accurately:

  • Use an EQ plug-in with a high-pass filter.
  • Select the popped area just before the tone of the word and set your EQ filter to roll off everything below 150Hz or so, before processing it. You might have to split it into two regions so you can crossfade for a smoother transition.
  • If you still hear a pop, repeat the process but delve a little further into the word (beyond the pop), and redo the crossfades.
  • Experiment with different frequencies and slopes on the high-pass filter to achieve the best results.

Before consolidating your vocal composite into one audio file, keep your original tracks, perhaps saving them as another arrangement — just in case you need to redo a fade or change a word, for example.
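The steps above can be sketched in code. This minimal example (my own illustration, not a plug-in’s algorithm) uses a simple first-order high-pass — a real EQ offers steeper slopes — applied only to the popped span, with short crossfades at the edges in place of the region splits described above. All names and values are hypothetical.

```python
import numpy as np

def highpass(x, fc, sr):
    """First-order high-pass filter, roughly 6 dB/octave below fc."""
    rc = 1.0 / (2.0 * np.pi * fc)
    a = rc / (rc + 1.0 / sr)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

def fix_pop(audio, start, end, sr, fc=150.0, fade=256):
    """Filter the popped span and crossfade it back into the take."""
    out = audio.copy()
    filtered = highpass(audio[start:end], fc, sr)
    ramp = np.linspace(0.0, 1.0, fade)
    # Fade in the filtered audio at the front, fade it out at the back
    filtered[:fade] = filtered[:fade] * ramp + audio[start:start + fade] * (1 - ramp)
    filtered[-fade:] = filtered[-fade:] * ramp[::-1] + audio[end - fade:end] * (1 - ramp[::-1])
    out[start:end] = filtered
    return out
```

If a pop survives, extending `end` further into the word and re-running mirrors the “delve a little further” step; changing `fc` corresponds to experimenting with the filter frequency.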

Pitch Correction

Now that you have created the best possible performance, that should be all you need to do, but there are times when some tuning inaccuracies are still present.

The vocal is the one element of a recording that can’t yet be emulated by a computer, but it’s increasingly encircled by a rapidly expanding pool of plug-in processors, among them tuning tools mainly designed to correct pitch inaccuracies. There has been a continuous debate, both technical and ethical, about the use of pitch correction tools. DAWs have become synonymous with a quest for perfection that can leave music soulless. However, pitch correction started before the use of DAWs: pitch shifters such as the Eventide H3000 were fairly common in the 80s and 90s, but the emergence of a device by Antares called ‘Auto-Tune’ changed the approach to producing vocals quite radically.

It all started when Cher made us ‘Believe’. Extreme Auto-Tune creates the yodelling effect we all came to love and then hate in the late 90s. Since then, every year sees the arrival of two or three new pitch correction tools. Among the leaders are Melodyne (the professional version is a standalone application that can do much more than simple pitch correction, and possibly a better job of it than Antares’ Auto-Tune), Waves’ offering (which looks like a hybrid of Auto-Tune and Melodyne), SoundToys’ ‘Pitch Doctor’, and TC Electronic’s ‘Intonator’. There are many more.

Auto-Tune’s primary use is to correct tuning inaccuracies, but it has been, and still is, used for more creative effects.

Pitch correction applications, such as the Logic Pro utility (pictured), can most usefully be thought of as real-time pitch-shifting algorithms. You can manually highlight the notes that you want to hear and increase or decrease the response speed to suit your needs. A slower response time may sound more natural but fail to cover the mistakes, whilst a faster response time will remove the pitch discrepancies at the cost of the naturalness of the voice. A fast response time results in a synthetic, almost robotic quality, which will be very noticeable if not desired.
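The response-speed trade-off can be sketched numerically. In this hypothetical illustration (not Logic’s or Antares’ algorithm), detected per-frame pitch is pulled toward the nearest equal-tempered semitone, and the response setting controls how quickly the correction catches up; actually shifting the audio is a separate, harder step omitted here.

```python
import math

def nearest_semitone(freq, a4=440.0):
    """Snap a frequency in Hz to the nearest equal-tempered pitch."""
    n = round(12.0 * math.log2(freq / a4))
    return a4 * 2.0 ** (n / 12.0)

def correction_track(detected, response):
    """response in (0, 1]: 1.0 = instant (robotic), small = slow/natural."""
    corrected = []
    current = detected[0]
    for f in detected:
        target = nearest_semitone(f)
        # Exponential glide toward the target pitch each frame
        current += response * (target - current)
        corrected.append(current)
    return corrected

# A flat A4 (435 Hz) drifting upward toward 440 Hz
frames = [435.0, 436.0, 437.0, 438.0]
fast = correction_track(frames, response=1.0)  # snaps straight to 440 Hz
slow = correction_track(frames, response=0.2)  # glides gradually toward 440 Hz
```

The instant version flattens every frame onto the grid (the “robotic” sound); the slow version preserves some of the original drift, which is why it can sound more natural yet leave mistakes audible.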

The Quest for Perfection

“Perfection is boring,” Geoff Foster, a veteran engineer at Air Lyndhurst, was recently quoted in print as saying. “To some degree, Cher’s got a lot to answer for… Sure, [Believe] was an extreme example and meant to be an effect, but the public bought it, literally.

“From a pop standpoint the public said, ‘We don’t care what our vocals sound like’. But in a way, that was a testament to the fact that the musicality of recording can easily become secondary to the technology. Cher can actually sing, but the current generation of pop stars have been given a mandate that they don’t need to.”

He goes on to say that it’s not just the artists, but also a new generation of audio professionals who have bought into vocal tuning as a de facto standard. “There’s a whole generation of engineers who have grown up thinking that the first thing you do when you get into a recording session is fire up Pro Tools, ready to do repair work,” he said in the same published interview.

Not all engineers see it quite that way though. Josh Binder, a twenty-something engineer/programmer in Los Angeles, says: “When you’re working with a great singer whose pitch is right on, you can still apply Auto-Tune. I’ll throw a chromatic Auto-Tune [patch] onto the vocal with a kind of mellow responsiveness level, which gives it a nice chorus/flanging effect. I’ll print the effect to a separate track and then paste it into the comped vocal mix at the end. You hear that kind of sound a lot now on female voices, like Christina Aguilera, and on a lot of really soulful R&B vocals. It’s not there to fix the vocal; it’s there to be part of the vocal sound.

“You can also use it to get a very cool portamento effect on vocals or on instruments,” Binder continues. “When you get a nice R&B slide or slur in the vocal, Auto-Tune can enhance it and make it even smoother. I mean, it almost sounds calculated, like you can hear the algorithms processing as you do it, but that has become part of the vocal sound for a lot of singers now.”

But Binder also opines that it’s not the public that drives the use of auto-tuning, nor is it a lack of chopping and comping, in most cases. Rather, he says, it’s frequently producers striving for perfection, often thinking of radio performances. “But whatever the reason someone uses it, it’s not hard to tell that it’s being used,” he says. “For people who have heard a vocal track before it’s processed and then after, the difference is pretty apparent if you have reasonably decent ears. The new trick will be finding ways to use it and not have anyone notice.”

UK engineer Donal Hodgson says, “I don’t believe there should be any limitations on the resources used to reach a great-sounding vocal, regardless of the singer’s ability. If the vocal performance isn’t cutting it, then get the toolbox out and fix it. Having said that, on the rare occasion I have been sent into the studio with someone who can’t sing, all the Auto-Tune in the world isn’t going to make them sound like a singer! I believe some talent is needed in the first place and then all the tricks can be added. I suppose this technology could be considered either as inducing apathy, or as a time saver — I have often fixed a vocal because it was quicker and easier than sending the singer back to the booth. I think it might be human nature.”
