June 13, 2014

Building A Computer Program That Can Teach Itself Everything

While science fiction movies – especially those predicting an apocalyptic future where the machines have taken over – have portrayed computers that can learn everything as dangerous, researchers at the University of Washington and the Allen Institute for Artificial Intelligence in Seattle suggest such programs could instead help people sift through the volumes of information online and find everything there is to know about a specific topic.

"It is all about discovering associations between textual and visual data," Ali Farhadi, a UW assistant professor of computer science and engineering, said in a statement. "The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them."

Farhadi and his fellow researchers have created the first fully automated computer program that can teach just about everything there is to know about any visual concept. Dubbed 'Learning Everything about Anything', or LEVAN, the program searches through millions of books and images on the web, and from them "learns" all of the possible variations of a concept. It then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly and in great detail, according to the team.

The program "learns" which of the images found online are relevant by "looking" at their content, using object recognition algorithms to identify characteristic patterns across the images. Unlike online image libraries, which typically are organized by caption text, the program draws on the content of the image itself: the arrangement of its pixels.
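LEVAN's real system trains object detectors on image features, but the content-based idea described above can be illustrated with a toy sketch (this is not the authors' code, and the "feature vectors" below are made up): images whose visual features are similar get grouped together, with no captions involved.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_content(features, threshold=0.95):
    """Greedily assign each vector to the first group whose
    representative it resembles; otherwise start a new group."""
    groups = []  # list of (representative vector, member indices)
    for idx, vec in enumerate(features):
        for rep, members in groups:
            if cosine(vec, rep) >= threshold:
                members.append(idx)
                break
        else:
            groups.append((vec, [idx]))
    return [members for _, members in groups]

# Four fake 3-D feature vectors: two nearly identical pairs.
feats = [(1.0, 0.1, 0.0), (0.9, 0.12, 0.0), (0.0, 1.0, 0.2), (0.0, 0.95, 0.22)]
print(group_by_content(feats))  # → [[0, 1], [2, 3]]
```

The captions of these hypothetical images never enter the computation; only the pixel-derived vectors do, which is the distinction the researchers draw with caption-organized image libraries.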

The researchers noted that users can browse the existing library, which lists 161 concepts on the LEVAN website, and it is expected to grow as more searches are run. If the concept a user is looking for doesn't yet exist, the program automatically begins generating an exhaustive list of subcategory images related to it. More than 64,000 subcategories and more than 45 million images have been processed so far.

For example, said the team, a search for "dog" could bring up a fairly obvious collection of subcategories that might include "black dog," "swimming dog" and "greyhound dog," but also include "dog bowl," "hot dog" and "down dog" – the latter being a yoga pose.

The technique reportedly works by searching the text of millions of English-language books available through Google Books. The algorithm then filters out words that aren't visual, and once it has learned which phrases are most relevant, the program is further trained to find matching images accordingly.
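The first stage described above can be sketched in miniature (a hedged illustration, not the authors' pipeline, and with a hand-made corpus standing in for Google Books): mine candidate "modifier + concept" phrases from text and keep only those frequent enough to be plausible subcategories.

```python
from collections import Counter

def mine_subcategories(concept, sentences, min_count=2):
    """Count words appearing directly before `concept` in the corpus
    and keep phrases occurring at least `min_count` times."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            if word == concept and i > 0:
                counts[words[i - 1] + " " + concept] += 1
    return [phrase for phrase, n in counts.items() if n >= min_count]

# A toy stand-in for book text.
sentences = [
    "a black dog slept by the fire",
    "the black dog barked at the swimming dog",
    "one swimming dog crossed the lake",
    "a stray dog wandered past",
]
print(mine_subcategories("dog", sentences))  # → ['black dog', 'swimming dog']
```

In the real system, the surviving phrases would then face the second filter the article mentions, keeping only those that are actually visual before image training begins; this sketch stops at the frequency step.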

“Major information resources such as dictionaries and encyclopedias are moving toward the direction of showing users visual information because it is easier to comprehend and much faster to browse through concepts. However, they have limited coverage as they are often manually curated. The new program needs no human supervision, and thus can automatically learn the visual knowledge for any concept,” added Santosh Divvala, a research scientist at the Allen Institute for Artificial Intelligence and an affiliate scientist at UW in computer science and engineering.

This is not the only computer program that learns from visual data.

Last year, researchers at Carnegie Mellon University announced the creation of a computer program that searches the web 24 hours a day, seven days a week, and from this teaches itself common sense. Dubbed the 'Never Ending Image Learner' (NEIL), it was designed to search for images and do its best to understand them on its own. The process is computationally intensive: the program runs on two clusters of computers comprising 200 processing cores.