Computer scientists Kai Li (left) and Moses Charikar (right) are developing a Web search technology that searches for images or sounds using multimedia files, rather than words, as queries. (Photo by Brian Wilson)

What it does: Rather than using words to search the Internet for an image or audio file, the technology being developed by Charikar and Li uses multimedia files themselves as queries. In one application, a consumer eager to find a shirt similar to a beloved old favorite might upload a digital picture of the desired style to find matches at online retailers; in another, a musician might input a song to find unauthorized uses of a particular recording.

"When you search for text in a text document, the search engine looks for all the documents that contain those words in that order to generate the results," Charikar explained. "But when you're searching with images, the search engine needs to determine what the visual cues for matching should be. If you're trying to find an image of a particular breed of dog, should it search for a paw? An ear? A color?"

Because collections of multimedia files are massive, the algorithms used to search the data are designed to process compact summaries, such as thumbnails, of each file, rather than analyzing each file in its entirety. The software identifies distinct regions of the query file, such as portions of an image or musical phrases in a song, and then searches for other files that share similar characteristics.
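One well-known way to build such compact summaries is random-hyperplane sketching (SimHash), a similarity-estimation technique Charikar introduced: each region's feature vector is compressed into a short bit signature, and signatures of similar regions differ in only a few bits. The sketch below is an illustration of that general idea, not the researchers' actual system; the feature vectors, dimensions, and bit count are invented for the example, and real image or audio features would come from a separate extraction step.

```python
import random

def simhash(vec, hyperplanes):
    """Compress a feature vector into a bit signature: one bit per
    random hyperplane, set by which side of the plane the vector lies on."""
    bits = 0
    for plane in hyperplanes:
        dot = sum(v * w for v, w in zip(vec, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a proxy for the angle between the vectors."""
    return bin(a ^ b).count("1")

random.seed(0)
DIM, BITS = 8, 64  # hypothetical feature dimension and signature length
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

# Invented feature vectors for three regions: a query, a near-duplicate,
# and an unrelated region.
query = [0.9, 0.1, 0.4, 0.8, 0.2, 0.7, 0.3, 0.5]
near  = [0.85, 0.15, 0.35, 0.8, 0.25, 0.7, 0.3, 0.5]
far   = [0.1, 0.9, 0.8, 0.1, 0.9, 0.1, 0.8, 0.2]

q_sig = simhash(query, planes)
# The near-duplicate's signature lands much closer in Hamming distance,
# so candidate matches can be ranked without touching the original files.
print(hamming(q_sig, simhash(near, planes)), hamming(q_sig, simhash(far, planes)))
```

Comparing fixed-length signatures instead of full files is what lets a search of this kind scale: the signatures are tiny, and bitwise distance is cheap to compute across millions of entries.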

Collaborators: Wei Dong, a graduate student, and Zhe Wang, a search engine programmer in the Department of Computer Science who earned his Ph.D. from Princeton in 2010.
Commercialization status: A patent is pending.