You have misunderstood what the writer means to convey. Follow this link; it's a simple demonstration of what is meant by "loudness" in this article. The link was posted by deadmau5 to clarify the same issue:

"the same pieces of music" are in fact several thousand pieces of work, more or less complex, possibly not to everyone's taste, but hardly all the same stuff... They have of course been played for hundreds of years, but they're still different pieces of music.

The issue is to define whether an album by Lady Gaga or Aguilera, played in your ears for 100 years, even with interpretational efforts and alternate takes, would amount to a type of torture or not. I'm for the "it is torture" interpretation.

I'd argue that the 9 Symphonies, or Rachmaninoff's piano concertos, or Mingus Ah Um, or Zappa's Freak Out, include episodes of music that are each different from the next. Even more recent albums such as Sting's "Nothing Like the Sun", while broadly defined as pop music, carry interesting variations in style, musical flavor and rhythm, and certainly try, at least, to look like an artistic effort. It seems that "making an effort" is not really welcomed by the music industry nowadays.

Mainstream music business is worse off. Luckily, many interesting musicians are finding (sometimes underpaid) outlets through so-called "new technologies" that are 15 years old. If you went by mainstream radio, TV and commercial internet distribution, you would think that contemporary music is uninventive, shallow and luckily in-tune (most of the time) singing, carrying stupid lyrics or wish-I-were-provocative (jeez, more than 30 years after the White Duke, really??) meanings, mostly sung by a combination of half-naked black or blonde ladies moving their heads like stereotyped ancient Egyptians (that right-and-left head movement keeping the shoulders still, best served with a characteristic "no" sign of the index finger), and mostly bare-chested well-built guys (at times you don't even get that last bit) obviously in the middle of some threesome affair and, why on earth, dancing rather than getting on with it... and why not, add in a bit of breakdance 'cause it's so new in a video. It's not even dull; dull is Vangelis.

Thank you for putting into words what I've been trying to explain to friends without knowing the technical terminology to back up my hypotheses. I believe I have a critical ear that some others may not possess, as most of my counterparts do not grasp what irritating sound I am referring to. I hear digital music as flat and extremely harsh, with little pleasing tonal quality and melody. To me, it all sounds the same: edgy and harsh and boring. I love all types of music, but do not listen to any current music, as it is extremely annoying to my senses. Garth Brooks, during a recent interview with Larry King, talked about the same thing, and said it was one of the reasons he has not recorded for a while. It appears he prefers the old recording methods to the "supposedly" better digital sound. We have had the best, so why do the big-ego guys think they can make it better? Will they ever figure out what us old fogies have known all along?

Maybe, but it's not a total non sequitur. His point is that digital sampling techniques miss nuances that the best analog recordings capture, and that complexity is lost as a result. The point of the article is that music is being dumbed down, in effect if not in intent. I would go further. An orange contains literally hundreds of flavor elements. An orange TicTac, say, might contain only one or two proxies. People schooled in MIDI are like chefs who have learned their trade using only artificial chemicals. And a whole generation of listeners has grown up not knowing the difference. Are you one of them?

I find it difficult to believe that 44.1kHz is insufficient. There are some humans that can hear 21kHz signals easily, but they tend to be young (high frequency sensitivity is the first to go). Of course, that is beside the point. Analog audio equipment has much less bandwidth than digital. In fact, normal CDs (ie, not SACD or HD) have more bandwidth than most equipment can record as it is. Nonlinearities in even half decent equipment are negligible. If the best analog recordings are better than the average digital one, the difference is in the mastering, not the medium.
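As a quick illustration of the sampling-rate point (my own sketch, not part of the comment above): the Nyquist theorem says a system sampling at rate fs can represent frequencies up to fs/2, so CD audio's 44.1 kHz comfortably covers the ~21 kHz ceiling of even the youngest ears.

```python
def nyquist_limit_hz(sample_rate_hz: float) -> float:
    """Highest frequency representable at a given sample rate (Nyquist limit)."""
    return sample_rate_hz / 2.0

# CD audio: 44.1 kHz sampling captures everything up to 22.05 kHz,
# above the ~21 kHz upper bound of exceptional young listeners.
print(nyquist_limit_hz(44_100))  # 22050.0
```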

As an electronics engineer, I once took an interest in audio, but confess I have not noticed any innovations in sound reproduction in the past ten years or so. Anyway, I suspect your complaint about digital reproduction has less to do with the sampling rate than with the quality (including bandwidth, phase distortion, noise suppression and range) of the analog end of amplifiers. Moreover, it is extremely unlikely that recording engineers compromise on musical fidelity just because of digital mastering. One assumes you are always using superior speakers, too. If your sensitivity is enough to distinguish and detect the noise and artifacts associated with D-A conversion, you must have been intensely irritated by the sound of the obsolete plastic discs, even using excellent pickup heads.

Conversely, the monotony of sampling has not gone unnoticed by the music community. Many electronic artists have learned to experiment with digital sampling in a manner that is pioneering and innovative. The music of Gold Panda and Dan Deacon - to give a brief example - has been able to push the limitations of digital music into new realms by playing off the stereotypes of electronic music and using the bounds of the electronic genre to create new and challenging music. Good new music does exist, whether in the electronic, jazz, classical, or pop genres; one must simply look for it.

Right,
I don't think tech has anything to do with dullness. It's the creative process at the source that is uninteresting, and you are left to overdose on digital enhancement (from normalization processes to FX of any sort - like that creepy robotic voice that has been so fashionable for a while now) to bring some quickly fading interest to a song that is just, well, bad.

Instead, write good music and hire good musicians; digital technologies can really do justice to the quality of the musical work and enhance the listening experience.

"9 dB Louder"... that's only the way pop records are produced nowadays... it just means that today's production fashion relies on over-compressing the tracks and the master. Nothing else. But that only happens because we have the equipment to do it (they would have done it before if they had had the opportunity), and because the music business is more regulated than ever before.
This also relies on the amplifier and monitoring systems (nonexistent for a wider audience during the 50's) that have been optimized for power over the last fifty years.
As for Doctor Serrá, he is completely wrong in some respects. The music of today is not based on the same ten chords... not even pop music (depending on what he considers pop). Arguing with the example of Jimi Hendrix is a clear display of ignorance... "there was a time when everything was better" is just a simple mistake (and a plain one). The only way he is right is that, by today's standards, more and more music sounds the same, but only because musicians have better chances of producing themselves and buying equipment than before.
There is much more than those ten chords, Mr. Serrá, and much more than Stockhausen could have possibly imagined.

Learn how to frickin' spell. And you are absolutely, entirely, completely WRONG.

I produce for a living, am quite good at it with multiple years of theory under my belt, and have a production company with over 300 million records sold to its credit.

Over-compressing tracks, and especially their master, would result in a smashed, thin-sounding song missing its low end, and has nothing to do with its dB; to wit, loudness and compression are two entirely different aspects with reference to the final dB of any master.

Apologies for the few grammar mistakes in the last comment; I am not a native speaker.
But I am a semi-professional musician whose father is an acknowledged sound engineer/technician, so what?
Misleading is a good one...
I don't want to get into a conversation about how most compressors boost the signal they are given and increase the logarithmic perception in our ears,
Mr. "300millionrecordssold" (even if you are right, loudness and compression are not the same... good bet, Mr. Watson).
I am not going to get into discussions about those things.
Fact is, it is the truth.
You seem to be one of those guys who thinks about how many records he has sold... and not about doing the job in the best possible way.
Today we have better options for recording, producing and whatever else we want. The music of today is more interesting than decades ago!
The music business is changing (bad luck for you... good luck for me)!

Learn how to frickin' record. If you don't believe that popular music is far more compressed now than it was 20 years ago, you don't have a clue. Dump, say, the first Rage Against the Machine album (original mastering) into a 2 track editor, then dump in something like the second Disturbed album (I've done this, btw). What you will immediately notice is that the RATM record has dynamics and the Disturbed record is squashed and brick wall limited (you can tell by the insane amount of clipping on the waveforms).
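That two-editor comparison can be approximated numerically: a squashed, brick-wall-limited master has a much lower crest factor (peak divided by RMS) than a dynamic one. A rough sketch, with synthetic signals standing in for the actual albums (the waveforms and limiter settings here are made up purely for illustration):

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB; lower means more compressed/'squashed'."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak / rms(samples))

# Synthetic stand-ins: a dynamic signal with a quiet and a loud passage,
# versus a hard-limited (clipped) version of the same material.
n = 1000
dynamic = [0.2 * math.sin(2 * math.pi * 5 * i / n) if i < n // 2
           else 0.9 * math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
squashed = [max(-0.5, min(0.5, 3.0 * s)) for s in dynamic]  # brick-wall limiting

print(crest_factor_db(dynamic) > crest_factor_db(squashed))  # True
```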

By the way, you may be a producer but you don't know the first thing about audio engineering. When you compress something, the apparent level of the BASS goes up, not the treble. Of course, you can just throw a good EQ in after it (like a GML or old Focusrite) and "fix" that. The fact that you don't know something this basic leads me to believe that you're full of crap on your claims of producing records that have sold millions. That, or the records you produce are horrible. Nickelback sells a lot of records, but they're still quite horrible.

I used to be in the industry, worked as a recording engineer and producer. I've recorded classical, jazz, rock, punk, rap, pretty much everything and actually know what I'm talking about.

"9 decibels louder" is a meaningless measurement. Measurements of sound volume require a reference level. Also, loudness is an average of measurement over time. The most common way of measuring loudness is RMS, though when measuring sound pressure in the field, there are different measurement methodologies.

It may be asking a lot for reporters writing on deadline to get things right, but this is an example of using a quantity not understood by most people without any context, which is useless.

A difference in average sound pressure level of 9 dB does have real meaning: it means that music today is almost twice as loud as music in the 50's. And the frequency profile used for sound measurements already accounts for any weighted averaging (i.e. root-mean-squared) required to make a measurement meaningful.

I am an audio electronics engineer and I deal with dBSPL, dBFS, dBrA, dBV, and many other audio-related decibel units on a daily basis, so I have to jump in here to correct you both. I'm not trying to one-up you or insult you or anything - I just want to correct the inaccuracies here.

Kent Williams, you're correct when you say that the decibel unit is meaningless when used without a reference. However, dB RMS is equally meaningless. RMS is simply a way to calculate the average level of a signal over time (which contrasts with a peak measurement). It does not provide a reference.
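A small numeric sketch of that distinction (my own illustration, not from the comment): RMS summarizes a signal's average level, and only becomes a meaningful figure once referenced against something, such as digital full scale (dBFS).

```python
import math

def peak(samples):
    return max(abs(s) for s in samples)

def rms(samples):
    """Root-mean-square: an average of signal level over time, not a reference."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dbfs(level, full_scale=1.0):
    """Reference a level against digital full scale -> dBFS."""
    return 20.0 * math.log10(level / full_scale)

# A full-scale sine wave: peak = 1.0 (0 dBFS), RMS = 1/sqrt(2) (about -3 dBFS).
sine = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]
print(round(dbfs(peak(sine)), 1))  # 0.0
print(round(dbfs(rms(sine)), 1))   # -3.0
```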

In order to reference the measurement, you need to use a referenced unit like dBV RMS, dBSPL RMS, dBOV RMS, or dBFS RMS.

The author is talking about recording media, so dBSPL, which can only be used in an acoustic system, has no meaning here.

Scotland777, as I mentioned above, dBSPL (sound pressure level) actually has no meaning with regard to audio recordings. dBSPL is a measurement of sound pressure in an actual acoustic listening environment, which means it relies on the speakers, the distance from the listener, the power of the signal through the speaker, and the air through which the acoustic signal propagates. It has no relation to the recording medium.

The measurement referred to in the article is actually dBFS (decibels full scale) in the case of CDs. 0 dBFS (peak) refers to the absolute maximum level that can be represented in the digital recording medium. In analog recording media, the maximum level is 0 dBOV (peak), which refers to the maximum analog level that can be reached without clipping in the electrical domain.

In both cases (analog and digital recording media), what defines "loudness" is the ratio between the average (RMS) level and the peak level of the recorded signal. It's true that this ratio has increased for modern recordings, which conversely decreases the dynamic range of the recording. Metallica's "Death Magnetic" album has one of the lowest dynamic ranges of all time. This is what is commonly referred to as the "loudness war."

I guess you can't blame the author (or the public) for not having a solid understanding of a pretty complex technical concept.

In any case, the author said that the recordings are 9 dB louder. This refers to the crest factor (peak/RMS), which is a power ratio, and our ears distinguish loudness as a power ratio, not a linear ratio. 9 dB equates to a power ratio of 7.9. Basically, it means that music has gotten 7.9 times "louder" in the past 50 years. Another way to think of it is that the dynamic range has decreased by a whopping 88%.
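The arithmetic behind those figures is easy to check (a sketch of my own; note that 1 - 1/7.94 is closer to 87%, which the comment rounds up):

```python
def db_to_power_ratio(db):
    """Decibels express a logarithmic power ratio: ratio = 10^(dB / 10)."""
    return 10 ** (db / 10.0)

ratio = db_to_power_ratio(9.0)
print(round(ratio, 1))               # 7.9 -> about 7.9x "louder"
# Dynamic range reduction implied by the same ratio:
print(round(100 * (1 - 1 / ratio)))  # 87 -> roughly the "88%" quoted above
```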

Dr Serrà's article, and the underlying research, are both nearly harmless nonsense.

While there is plenty of evidence indicating that equal loudness has increased significantly since computers began mixing music (q.v. Fletcher-Munson curves), there’s zero evidence that a Fourier transform can extract artistic qualities such as "timbre" from a public domain recording.

Just reliably picking out a melody from a recording, via an automated process, is next to impossible. One recent academic paper was overjoyed to report ~60% accuracy in this regard, and that’s only for a single melody line. Good luck acquiring meaningful "pitch" and "timbre" data for half a million recordings of every genre, recorded with every conceivable combination of equipment, with every conceivable frequency response.

Given this highly randomized data set, the authors claim to have found "codewords" which are representative of musical styles and periods. The authors acknowledge the novelty of the codeword idea, and have provided no evidence that codewords actually exist, or provide any indication that codewords are useful for psychoacoustic analysis. No composer living or dead has ever considered a codeword while writing a song.

Unfortunately, this article is only mostly harmless. Heinrich Schenker's school of tonal analysis tried expressly to prove the superiority of German music through complicated tonal analysis. Likewise, the author's claims of "blockage" and "no-evolution" of modern music should be interpreted entirely as the opinions of the authors and not given any further countenance.

If you think that today's music hasn't narrowed in timbre and its attendant recording techniques have been essentially dulled over the last few decades, you are severely out of touch with current popular music and its history. The fact is, whether or not the science is spot-on, this article reflects the current reality of pop music.

It, however, neglects to take into account things like the rise of minimalism as an influence on dance music (from which much of modern pop is derived) and changes in recording technology (that it's gotten louder is a big "duh" - yes, the loudness wars have destroyed a lot of DR, but music in 1975 was already significantly louder than in 1955, before the loudness wars kicked off).

Also, the timbral thing is a mystery. You cannot tell me that somehow there were more unique timbres in late-60's guitar rock than there were when pop music embraced synthesizers in the late 70's and 80's.

First, it does not make any sense to say that music is now 9 decibels louder than before. The volume of a song is dependent not just on the recording but the volume the listener plays the music at. If a record company increases the volume, the listener can simply reduce it on his end.

Second, volume levels have lost variation because an increasing proportion of music listeners listen in settings with much ambient noise (e.g. subways, cars or airplanes). One of the biggest challenges for a sound engineer today is to keep the decibel level consistent enough that someone listening in their car doesn't have to turn the volume up at the soft parts (where the music is easily drowned out by ambient noise) and then turn it down at the loud ones. Instead, sound engineers strive to keep decibel levels relatively consistent, but allow variation in texture to achieve variation in PERCEIVED loudness, which is the more important aspect. Hence, focusing on decibel levels is almost certainly going to overstate the degree of homogeneity of modern music.

No no, they're talking about the level in which it was recorded to tape (er, to hard drive). And compression plays a large role in this. Think of it like mountains and valleys ... the mountains in most music are the same height, but the valleys, even 10 years ago, were lower ... so there was more dynamic range between, say, a verse and a chorus in the same song. Today, they're slamming everything so hard that they have to compress the tracks harder, thus the valleys end up being just as tall as the mountains. What does this mean? There is less and less range in any given song today ... the verses, choruses, the singing ... it's all at a continuous volume. THEN, when it's played on the radio, it gets compressed even more. You should read up on the "volume wars" some time, kind of interesting. Anyway, sorry for so many words, but the author of this article didn't go into the technical aspect as much as they should have to fully explain what it means. Hope I did a reasonable job with the whole mountains and valley thing.
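The mountains-and-valleys picture corresponds to downward compression followed by make-up gain: levels above a threshold are reduced by a ratio, then the whole signal is turned back up, so the quiet "valleys" end up nearly as loud as the "mountains". A toy sketch of the idea (a simplified per-sample gain computer, not a real compressor with attack and release; the threshold and ratio are arbitrary):

```python
def compress(sample, threshold=0.5, ratio=4.0):
    """Downward compression: shrink the amount a sample exceeds the threshold."""
    level = abs(sample)
    if level <= threshold:
        return sample
    new_level = threshold + (level - threshold) / ratio
    return new_level if sample > 0 else -new_level

def master(samples, threshold=0.5, ratio=4.0):
    """Compress, then apply make-up gain so the peak is back at full scale."""
    squashed = [compress(s, threshold, ratio) for s in samples]
    gain = 1.0 / max(abs(s) for s in squashed)
    return [s * gain for s in squashed]

verse, chorus = 0.2, 1.0            # quiet "valley" vs loud "mountain"
out = master([verse, chorus])
print(round(out[0], 2), round(out[1], 2))  # 0.32 1.0 -> the valley rose
```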

The article is as the article says (as Forrest Gump might say). They are "talking about" loudness. There is no mention of recording levels or crest factor (peak/RMS) or power ratios or dynamic ranges.

The article clearly states in no uncertain terms "songs are, on average, 9 dB louder" than during the 50's. This is hogwash since everyone knows that "loudness" depends on one thing: the level of the volume on the playback device.

The original article might hold more details but that doesn't mean that the Economist article is not misleading.

lol, they're talking about loudness when they mean something else, but it works for the layman. Throw a song from the 80's into a 2 track editor (original mastering, not remastered), then something from the last 10 years, and compare the waveforms. What you will notice is a LOT more compression (loss of dynamic range) and that the average level of the recording is higher relative to "0dB" (max level in digital). You'll probably notice a lot of clipping in the new recording, none in the old one. When you compress something (or "squash" it) the dynamics are reduced, which means you can increase the overall level printed to tape/disk/whatever. Analog does a bit of this already, which is a big part of why analog sounds different than digital (all the EQ applied and the way tape distorts would be the other main components).

Another way of putting it is, for the same amount of gain after the source (CD player, music server, whatever) a newer record will be louder on average (9dB according to the article).

Up until the mid 1990s, outstanding musicians were paid very well, so a lot of studio-recorded music was of very high quality. Now that the value of recorded music has fallen (because youngsters don't pay for it, or spend their money on other things (e.g. video games)), music is not attracting the same quality of people.

There is still a lot of very good music being made - but now that modern electronic keyboards make it so easy to make "music", it takes a lot of time to sort it from the dross in the long tail. There may be internet radio stations that take the time and trouble to select good new music (good luck finding one!), but the days when you'd hear lots of good new music on free radio, select what you want from among it, and then confidently go out and buy the album have gone.

Our best (only?) hope is the creation of applications that work out your musical taste and then create music for you on the fly.

We do spend our money on music, but not in traditional ways. We don't spend our money on plastic CD's or MP3 files from iTunes, but we do spend our money going to live shows and festivals.
This summer I paid $300 for an admission ticket to a three day festival featuring over 300+ electronic dance music artists.
There is plenty of good music being created, but it is not being played on the radio.

uh, musicians (not studio musicians, but musicians who write/play their own music) don't make their money from record sales. They make their money from touring. If a band can consistently sell 1,500 tickets per show, and get $10/ticket going to them, that's $15,000 per show. 100+ times per year and each musician should be pulling in over $100k/year, and that's for a modestly successful band. If you can do an arena tour, you'll make $millions. That doesn't even take merchandise into consideration.
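That back-of-the-envelope touring math can be written down explicitly (the ticket and show counts are the commenter's figures; the four-piece band used for the per-member split is my own assumption):

```python
def tour_gross(tickets_per_show, artist_cut_per_ticket, shows_per_year):
    """Annual gross to the band from ticket sales alone (no merchandise)."""
    return tickets_per_show * artist_cut_per_ticket * shows_per_year

gross = tour_gross(1500, 10, 100)  # the commenter's figures
per_member = gross / 4             # assumed four-piece band
print(gross)       # 1500000
print(per_member)  # 375000.0 -> comfortably "over $100k/year" each
```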

In fact, the current recording label set-up is dying. If the labels were smart (they're not) they would just give the music away and focus on artist development and concert promotion. Give the music away for free and you'll get a lot more people to give it a listen, and if it's good they'll pay to see the band live (where the money is actually made).

As a matter of fact, music got lighter!
All thanks to the spread of compressed digital sound such as MP3, Dolby Digital, DTS, the iPod, digital TV, digital radio, etc.

Sound compression starts from narrowing volume variations. In no way will this make sound louder! I don't see what this has got to do with artificial intelligence, or how artificial intelligence can tell the difference in music.

sikko, dynamic range compression refers to an audio processing technique that alters the average signal level with regard to the peak. It started out as a purely analog technology back in the 40s with vacuum tubes, and it was used extensively by the Beatles, Hendrix, and yes, even Deep Purple.
The human ear perceives "loudness" as the ratio between the average signal level and the peak level. By increasing this ratio, all other things held equal, your brain will tell you that the sound you're hearing is louder. This is not a subjective test - it is simply the way that the brain works. Side-by-side listening tests have told record companies that people prefer "louder" music, so mastering engineers have slowly dialed up the ratio over time. This is commonly referred to as the "loudness war."
You seem to be confusing this concept with digital file storage compression (actually, to be precise, "psychoacoustic compression algorithms", such as MP3), which decrease the size of an audio file and can introduce certain audible artifacts as a result.
These are completely separate concepts.
For further reading on this topic, please check the "Psychoacoustics", "Loudness war", "Dynamic range compression", and "Data compression" articles on Wikipedia.

As Brett said, they're not talking about data compression but dynamic range compression. When you squash everything down to a few dB of dynamics, you can then raise the overall level by a lot and therefore make it louder.

This study sounds reasonable on its face, but I am guessing that it takes the wrong approach to interpreting pop music. It assumes that harmonic complexity is the sole basis for judging whether a pop song is interesting, and Dr. Serra is imposing his own values on genres that have given up on harmonic complexity as an area worth exploring. How many flat 13th chords can you have, anyway?

I cite Aaliyah's "Are You That Somebody?" (produced by R&B genius Timbaland) as an initial case to show that Dr. Serra's assumptions may not capture the interesting aspects of modern pop music. Does Serra realize that Timbaland takes a page from musique concrète and actually embeds a sample of a baby gurgling with glee into the song itself?

Essentially, Dr. Serra is way behind the times: the pop composers are exploring areas of rhythmic and sonic complexity that he does not even ken, let alone measure. Judging from your article, I bet his efforts to explore timbre are not nearly as mature as his tools to explore sonic complexity, given the amount of sheer sonic wonder that you can find in the best pop music.

Huh... a baby making gurgling sounds? This is innovative? I think you need to uncork Pink Floyd's "Dark Side of the Moon" and listen from beginning to end, and you will find a remarkable use of spoken voices and sounds of daily life interwoven into the songs as well as in the transitions between songs. Great stuff.

Simon and Garfunkel also had a great album, "Bookends", which had a track called "Voices of Old People". Side one of the album is about the phases of aging, and the track fits in nicely with the theme (although side two has the more memorable songs on it).

I love it when folks talk about "innovation" before realizing that some artists did the exact same thing 30 or 40 years earlier.

You're spot on about "Dark Side." I'm honestly not a fan of that album so I don't think I even knew they used found sounds on that album. Also, I would not be surprised if the Beatles used similar approaches when they experimented in tape.

However, what is unique about "Are You That Somebody" (and, if I recall correctly, "Dark Side") is that the found sounds are part and parcel of the songs themselves. These tracks blaze an unusual trail, since few musicians employ this technique in mainstream popular music, for the following reasons:

1) Until the Akai sampler came into play in the late 80s, sampling found sounds was very complex and required lots of equipment, lots of time, or both. Floyd had all of these things when they did "Dark Side", and only tape-splicing obsessives like John Oswald could offset the cost by sheer dint of effort (and many slashed fingers, I presume).

2) The idea of found sounds is still rather unusual in mainstream pop music, even today. I used the Aaliyah track as an extreme example of sonic creativity to make a point. The mega-popularity of "Dark Side" is an obvious fluke, partly generated by the band's cult following and the pot-influenced legends that sprang up around it.

Even though you are correct about "Dark Side," that does not make "Are You That Somebody" any less unique to the history surrounding use of found sound in pop music.

By comparing the whole database, I think we're missing an important point. A listener of the 50s--whether the 1750s or the 1950s--had access to only a very small percentage of the music produced.

With the digital world, access to music, and types of music, is much greater. As a college student in the 1970s, I struggled to afford the $6-$8 for a single new vinyl album. And these albums were mostly rock (I was fortunate to have room-mates who purchased jazz and classical).

Today, I have most of these vinyl albums in digital. In addition, I have hundreds of other "albums" in digital....in genres that wouldn't have been commonly available or noted in the limited analog age.

Painted with a broad brush the 1950s--or the hits we fondly latch onto--are interesting. Listen to all of the songs on a regular basis, and they also start to sound the same.

And let's not forget to discuss the effect that Apple and Steve Jobs have had in promoting horrible fidelity and music reproduction by choosing and pushing the horrid MP3 format. For this alone, Mr. Jobs can be referred to as the Ray Kroc of the digital age....

Apple uses the AAC format not MP3. Apple and Steve Jobs did not promote "horrible fidelity". They are providing what people want.

Most people are not like you and me. They cannot tell the difference between the sonic quality of a vinyl record, CD, DVD-Audio, MP3, or AAC. The difference in quality among audio formats is negligible if one is listening to music in the car or through earbuds.

To the overwhelming majority of people, as long as it sounds at least as good as the radio in their car, they are satisfied.

I would say listening in my car is where I notice the quality the most. Listening on my computer, I can't really tell much difference between FLAC and 320kbps, but in the car I notice it, since I have more speakers and much better quality audio gear.

I can notice a difference between 128kbps and 256kbps or higher, but anything more than 256kbps is not noticeable to me at all. I rip all my music at the lowest (i.e. highest quality) compression rate, since storage capacity is now, for all practical purposes, nearly infinite when it comes to digital music (I have a 1 TB hard drive that cost me $70). Even my smartphone can store nearly half of my music collection.
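The "practically infinite storage" point is easy to quantify: at a constant bitrate, file size is just bitrate times duration. A rough sketch of my own (assuming 4-minute songs at 320 kbps and taking 1 TB as 10^12 bytes):

```python
def song_size_bytes(bitrate_kbps, minutes):
    """Constant-bitrate audio: size = bitrate * duration (kbps -> bytes/sec)."""
    return bitrate_kbps * 1000 / 8 * minutes * 60

size = song_size_bytes(320, 4)       # one 4-minute song at 320 kbps
songs_per_tb = int(1e12 // size)
print(round(size / 1e6, 1))  # 9.6 -> about 9.6 MB per song
print(songs_per_tb)          # 104166 -> roughly 100,000 songs on a 1 TB drive
```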

Actually, just a little factoid for you, but MP3 and AAC are simply two sides of the same coin. They are both lossy, psychoacoustic audio compression algorithms that are largely based on the exact same theoretical backgrounds and technologies. They operate on the principle of removing audio data that the listener's brain will not notice is missing, based on a psychoacoustic model. AAC was developed more recently, so it has the advantage of a better compression efficiency thanks to some updated techniques.

Still, it's really just a question of file size rather than quality. AAC is not "better" in a strict definition, it just achieves similar audio quality at lower file sizes.

In other words, if you encoded MP3 at a 3x higher bitrate than AAC, then the MP3 will actually sound much better. AAC generally wins against MP3 when the file sizes are constrained.

Hope that makes sense.

For more reading, check the Wikipedia articles for "Advanced Audio Coding" and "MP3".

Music is relative and subjective. Radio-tunes perhaps have gained in volume, but other genres have explored new ideas. The research is overgeneralizing. Perhaps the research falls into an expectation bias of the researcher.
One needs to be reminded that modern pop music is rebellious, relative to classical music, and is intended to create a large market segment, if not already targeting a niche within that. As a result, it has to be short and catchy. But that's only modern pop.
There are more musicians today than ever, plus more musical collaborations.
Who defines what decibel range a piece of music should have?

I agree... modern pop is accessible. I am simply pointing out that most modern pop personifies rebellion as a way to entertain - hence the loud makeup and noise, the simplified chord progressions, the overuse of digital instrumentation, all in order to stand out (to the average consumer).

A listener does not have to be an 'active' listener to enjoy pop music. Whereas to enjoy certain jazz, a listener has to actively find the melody of the solo within the tune.

Besides, I am not suggesting that mainstream music is bad. Again, music is subjective.

It follows from the high level of civil liberties and material wealth nowadays. The kids have it good, today! In certain kinds of Heavy Metal, there is a borrowing of "devices" from the classical era, like semi-operatic vocals, choirs and instruments like the harpsichord. They often use instrumental "songs" which could be seen as an electrified "allegro" or "presto" movement.

This is somewhat off topic, but related. As I move through my Wrinkly years I am less able to hear higher frequencies. I have that annoying "ringing" in my ears. I listen to lots of music, primarily classical, Broadway shows and The Stones. Does anyone make headphones that can cancel out the "ringing" and permit me to hear the high notes?

Editor: please feel free to delete this if you feel it is too far off topic.

A normal ear picks up physical vibrations of the air; your inner ear (through hair cells) translates them into electrical signals. Those electrical signals are sent to your brain, which processes the information and lets you hear the interpretation of that original vibration as sound or noise.
Tinnitus is a neurological phenomenon; it is not caused by air vibrations. It is caused by an aberrant signal from the hair cells in your inner ear, or from some connection on the way to your auditory processing centers.
Noise-cancelling headphones work on the principle of inverse waves, to actually stop matter from vibrating. Therefore tinnitus cannot be cancelled out by noise-cancelling headphones, since they work at the level of air molecules, not on your inner-ear hair cells (not cells that grow hair in your ear - these are organs responsible for the transfer of motion into electrical signals).
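The inverse-wave principle described above can be shown in a few lines: adding a wave to its sign-inverted copy cancels the air vibration to zero, which is exactly why it works on external sound but not on a signal generated inside the ear itself. A minimal sketch (an idealized single tone; real cancellation is imperfect):

```python
import math

# Incoming sound at the ear, and the headphone's phase-inverted copy of it.
noise = [math.sin(2 * math.pi * i / 100) for i in range(100)]
anti_noise = [-s for s in noise]

# The air carries the sum of the two waves: they cancel exactly.
residual = [a + b for a, b in zip(noise, anti_noise)]
print(max(abs(r) for r in residual))  # 0.0 -> the air vibration is gone
```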