Whether you're aware of it or not, you've heard Auto-Tune. Many commercially released songs are tweaked with this technology to "improve" the vocal performance.

A commonly cited use of the tool is Cher's "Believe" from over a decade ago. The unnatural, almost robotic tone of the vocal undeniably made the track. It was an entirely new sound for the commercial charts, and it made ears prick up precisely because it was so unusual.

Now Auto-Tune is everywhere. Its use as an effect has cascaded through the years from Cher to many of today's R&B artists, such as Usher, Kanye West and Lil Wayne. Smartphone owners can even download the I Am T-Pain app to get "that sound".

Locally, I was at first surprised to hear Guy Sebastian using Auto-Tune as an effect on his recent release "Who's That Girl". I respect Guy as a very talented vocalist and songwriter - why would he need to correct his voice? His use of pitch correction is testament to the fact that it is now applied to capture the modern R&B style, even in an otherwise already-quality recording.

However, the effect no longer creates a unique sound; it creates the generic sound of modern pop. The ability to correct pitch to sit perfectly in tune has bred a monotony in modern music. A single episode of Glee is proof enough of that (or, if you need further reading material, Google the phrase "Glee autotune").

The issue calls into question the reliance on this vocal-correction tool to produce a good performance, rather than recording the right performance (or performer) in the first place. It's no longer used to add a once-fresh flourish to a song - rather, many songs simply could not be made listenable without it.

But on the flip side, I believe there is value in Auto-Tune. It's hard to sum up my feelings on the matter any better than recording engineer Eric Valentine, discussing techniques he used on legendary guitarist Slash's solo album "Slash" (Audio Technology Magazine, Issue 76, August 2010, p40):

"I actually get more honest performances from singers when I capture them in a computer. I can edit their performance, for instance tune a really cool performance, where the emotion is exactly what we want but it's a tiny bit out of tune in some places. I'll only nudge things a bit to make sure it's not distracting, meanwhile definitely making sure everything keeps sounding like real human beings singing... It just allows us to use really great, unreproducible but slightly flawed performances".

This, I believe, is the key. Use Auto-Tune sparingly, and only enough that minor flaws in an otherwise powerful performance stop being a distraction in the recording. Use of Auto-Tune in LIVE performance, though? Hell no!

After all, it's the human-ness of a performance we want in the live environment, right? A few duff notes and some on-stage personality are part of the reason we go to live shows. To err is human, and sharing that experience can create a level of connection and intimacy between performer and audience that can't be manufactured.

To err is human. To rely on a corrective tool for your success is unforgivable.