At Google, a blind engineer aims to make technology more accessible to people who can't look at a screen—or just don't feel like it

"Selfishness, when done right, can be a huge motivator," says T. V. Raman, MS '92, PhD '94, wearing heavy black sunglasses as he sits in a small conference room at Google headquarters in Mountain View, California. Blind since glaucoma stole his eyesight as a teenager in his native India, Raman has dedicated himself to devising technology accessible to the visually impaired—from talking computers to touch-screen phones using aural and tactile feedback. "It isn't that often," he notes, "that you have the luxury of being both the producer and the consumer."

Because Raman could see for the first fourteen years of his life, visual images are imprinted in his brain—so, he explains, "you take everything you hear and map it to the visual world because it gives you a frame of reference." That understanding of both the sighted and sightless worlds drives Raman's work with Google's research division. Those efforts began in earnest during his graduate days on the Hill, where he earned a master's in computer science and a PhD in applied mathematics; 1990 was a pivotal year thanks to two important additions to his life: a guide dog and a talking PC. "I ran around like a fool saying I was going to build a robot guide dog," he recalls, until he realized that no guidance system would be able to handle an Ithaca winter.

Instead, Raman focused on expanding the possibilities of the talking PC. In the fall of 1990, he took a computer science course on algorithms whose instructor, Dexter Kozen, PhD '77, was writing a textbook. "Give me the files that you use to produce the printed version," Raman told him, "and I'll figure out a way to make my system read it." It was a tough challenge. "It is one thing to take a paragraph of text and send it to a speech synthesizer," he says. "It still sounds mechanical and robotic, but at least you can understand it. If you take a complex math equation, the problem gets significantly harder."

But this is a man who speaks eight languages, constructs intricate origami-like sculptures out of paper as he answers interview questions, and can solve a Braille Rubik's Cube in twenty-three seconds (a feat immortalized on YouTube). His solution for rendering electronic documents verbally—which he called Audio System for Technical Readings, or AsTeR—became his thesis and won the Association for Computing Machinery's Doctoral Dissertation Award. (In 2010, sixteen years to the day after his thesis defense, Raman released AsTeR as an open-source platform.) "I decided the thing I should work on going forward was, how do you encode electronic information so that you can do more than just look at it?" he recalls. "And in cases when you have good information encoded, how do you convey it effectively via an auditory medium?"

Before joining Google in 2005, Raman worked at Adobe Systems—where he helped adapt the PDF format so that it could be read by screen readers—and then in advanced technology development at IBM, where he filed more than twenty patents in six years. Within a year of arriving in Mountain View, he developed a version of Google's search engine that ranks websites according to accessibility for the visually impaired and gives a slight preference to the ones that work well with auditory screen readers.

In his cubicle at Google, where his yellow Labrador guide dog (named Hubbell) is usually in residence, the forty-four-year-old Raman uses a highly customized system that he constructed himself; through wireless headphones, he listens to a screen reader that is calibrated to speak at nearly triple the speed of normal speech. Over the past couple of years, his work has been fueled by the revolution in mobile technology, leading to the development of Project Eyes-Free, an open-source effort to add audio and tactile alternatives to applications that use Android (Google's cell phone operating system) with just a few lines of code. For instance, there are mini apps for audio feedback about date and time, GPS location, and battery and signal strength. In addition, users can launch and center a dial pad by placing a single finger on the touch screen, which fixes the position of the number five key. Much the same process allows a user to navigate the phone's address book. Best of all, says Raman, "it goes with you; if you get a new phone, you sign in and it starts talking to you automatically. For comparison, similar tools have cost upwards of $500—and needed to be purchased and installed every time you got a new phone."
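The idea behind that dial pad—that wherever the finger first lands becomes the five key, with the other digits read off by their offset—can be illustrated with a minimal sketch. This is a hypothetical reconstruction of the relative-positioning technique, not the actual Eyes-Free code; the key size and coordinate handling are assumptions for illustration.

```python
# Hypothetical sketch of eyes-free relative dialing: the first touch
# anchors the 5 key, and every other digit is found by its offset
# from that anchor on a standard phone pad (1-2-3 / 4-5-6 / 7-8-9 / 0).
KEY_SIZE = 80  # assumed pixels per key cell, purely illustrative

def digit_at(anchor, touch, key_size=KEY_SIZE):
    """Map a touch point to a phone-pad digit, relative to the anchor (the 5)."""
    col = round((touch[0] - anchor[0]) / key_size)
    row = round((touch[1] - anchor[1]) / key_size)
    if (col, row) == (0, 2):              # two cells below the 5 is the 0 key
        return 0
    if -1 <= col <= 1 and -1 <= row <= 1:
        return 5 + col + 3 * row          # 3x3 grid of digits centered on 5
    return None                           # touch landed outside the pad
```

Because every digit is located relative to the first touch rather than at a fixed screen position, the user never needs to see the screen—only to know the familiar keypad layout.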

Raman is careful to say that his efforts are geared not only toward people who cannot see a computer or phone screen, but also to those who choose not to. Always, his aim is to create something for the mainstream that also happens to work for a niche population—i.e., the blind—because, he says, that will translate into a lower price point. He likes to tell the story of the time a store clerk tried to sell him an expensive specialized clock for the blind; instead, Raman purchased an answering machine that happened to announce the time. "When you sell anything to a captive audience, typically quality suffers," he says. "The acid test is whether you, as somebody who can see, would be willing to use it at those times when you're not interested in or capable of looking at the screen. If the answer is no, then there's a quality problem. I call that the 'threshold of indignation.' " Thus, when he tries to convince his fellow engineers to design products accessible to the visually impaired, he chooses his words carefully. Says Raman: "You have to say to them, 'Wow, that's really neat—I wish I could use it in the dark.'"