So begins the exchange between actor Kevin James and the automated phone system in the 2007 film "I Now Pronounce You Chuck and Larry" -- a scene that correctly assumes moviegoers have had personal experience with the absurdity of non-human customer service.

Who hasn't? We've all waited through the recitation of menu items, none of which were related to our actual question. We've all hit zero repeatedly, hoping to be transferred to a real person. When that didn't work, maybe we even lost our temper, shouting "Representative!" over and over, whether the robot had given us that option or not.

Why hasn't there been a mass revolt against automated systems? The answer is simple: We believe that this nonsense is temporary. We believe that computers are on the cusp of being able to understand human language. And that belief, according to many linguists and cognitive scientists, is completely wrong.

First, there's the problem of voice recognition itself. Julie Sedivy, a professor of linguistics and psychology at the University of Calgary, told me that simply recognizing speech sounds and matching them up with specific words is much more complicated than most people realize.

David Wheeler

"The way I say 'dog' will depend on my age, gender, geographic dialect, the particular anatomy of my vocal tract, and how quickly or formally I'm speaking," she said. "Humans are able to calibrate their perception after hearing just a couple of seconds of someone's speech, but really good speech recognition is still a problem for many programs."

Although this technology has steadily improved for the past 20 years, "speech recognition systems are still markedly inferior to human beings in understanding spoken language," said John Nerbonne, a linguist and information sciences professor based in the Netherlands. "Telephones are a particularly difficult medium because they limit the signal a good deal."

Furthermore, studies show that people almost universally hate automated phone systems, and that most customers are even willing to pay more to speak to an actual person.

"Ally Bank, Discover Card and TD Bank all have ads on television right now that brag about the fact that if you call the phone number, a real live person will answer," said Adam Goldkamp, spokesperson for GetHuman, an organization dedicated to improving customer service. Such is the sad state of customer service today: only now are companies waking up to the revenue they lose when frustrated customers give up on the automated system and take their business elsewhere.

"When you see companies launching ads like these, it shows they understand that there are things they can do to increase their future revenue by giving customers what they want: an actual person to speak to when they have an issue," Goldkamp said.

Maybe one day in the future, automated systems will be able to identify our words with perfect accuracy. Even then, there is still an insurmountable problem: the ability to understand what we mean by those words.

"We're a long way from being able to communicate with computers in real language," said Suzanne Kemmer, director of Cognitive Sciences and associate professor of Linguistics at Rice University. "Human language has a powerful design feature that works great for normal person-to-person interactions, but is completely at odds with the way computers work."


Computers are based on formal logic and fixed categories, she explained. Human language is flexible and dynamic, and follows a cognitive logic that differs fundamentally from computers. In short, human words and grammatical structures don't have fixed meanings. Instead, they have a certain amount of vagueness and ambiguity built in, so that their meaning is highly affected by context.

Actually understanding meaning is a very different problem from voice recognition or from the auto-correct on your computer or phone, Kemmer said.

When I brought up this topic with Harvard professor Steven Pinker, one of the world's most influential linguists, he noted that major companies have largely dropped the ball on real artificial intelligence, settling instead for statistical patterns mined from large datasets and applied to user input: "The stupidity of a lot of computer language understanding systems comes from the fact that they've turned their backs on genuine intelligence and satisfied themselves with statistics."

In other words, computers are still very bad at guessing what we mean when we speak.
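Pinker's complaint about statistics can be made concrete with a toy sketch (my own illustration, not any real company's system): a bigram model that "predicts" the next word purely from co-occurrence counts in a tiny made-up corpus. It has no notion of meaning at all; it simply echoes whatever word most often followed the previous one.

```python
from collections import Counter, defaultdict

# A toy corpus of phone-menu phrases -- real systems train on billions of words.
corpus = (
    "please say your account number "
    "please say your name "
    "please hold for a representative"
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("say"))   # -> "your", because "your" always followed "say"
# predict("your") is a toss-up between "account" and "name" --
# the model has counts, not any idea what the caller wants.
```

The point of the sketch is not that bigram counting is what any given vendor ships, but that frequency alone, however large the dataset, captures which words co-occur, not what a sentence means.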

They also don't get our social and emotional psychology. "Often the automated phone systems were developed with a tin ear to the way people interact with each other," Pinker told me. "They sound like people, but if you think of them as such, they are the most infuriating people in the world. When I hit '0' to get a human being, and a voice dripping with a combination of mock concern and mock confusion says, 'I'm sorry, but I did not understand your answer,' I am apt to go into a rage."

"If this were a real person," Pinker added, "she would be simultaneously stupid, mendacious, and condescending."

It's time to stop the madness. We are not on the cusp of inventing computers that understand human language. Silicon Valley can, and will, continue to strive for this goal.

In the meantime, let's stop kidding ourselves. Let's admit that computers, by themselves, are terrible at customer service. Let's admit that, at a time of economic uncertainty and job losses, we should be supporting companies that employ real people to answer our questions. Let's admit that, unless we demand change, we will be forced, forever, to deal with an automated system that thinks our name is Barry Shmalenpine.