Baidu’s Artificial-Intelligence Supercomputer Beats Google at Image Recognition

A supercomputer specialized for the machine-learning technique known as deep learning could help software understand us better.

Researchers at Google have created software that can use complete sentences to accurately describe scenes shown in photos—a significant advance in the field of computer vision. When shown a photo of a game of ultimate Frisbee, for example, the software responded with the description “A group of young people playing a game of frisbee.” The software can even count, giving answers such as “Two pizzas sitting on top of a stove top oven.”

Many computer vision projects struggle to mimic what people can achieve, but Microsoft Research thinks that its technology might have already trumped humanity... to a degree, that is. The company has published results showing that its neural network technology made fewer mistakes recognizing objects than humans in an ImageNet challenge, slipping up on 4.94 percent of pictures versus 5.1 percent for humans. One of the keys was a "parametric rectified linear unit" function (try saying that three times fast) that improves accuracy without any real hit to processing performance.
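The "parametric rectified linear unit" (PReLU) mentioned above generalizes the standard ReLU activation by giving negative inputs a small learned slope instead of clamping them to zero. A minimal NumPy sketch of the idea (the slope value 0.25 here is illustrative, not a figure from Microsoft's paper):

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, a learned
    slope `a` for negative inputs. Setting a = 0 recovers the
    plain ReLU; in PReLU, `a` is learned during training."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x, 0.25))  # negatives are scaled by 0.25, positives pass through
```

Because the extra parameter is a single scalar (or one per channel), the accuracy gain comes at almost no additional compute, which matches the article's "without any real hit to processing performance" claim.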

You aren't about to get many keen-sighted artificial intelligences just yet. Microsoft is quick to note that its vision system (like others) excels in tests like these, where there are subtle distinctions that flesh-and-bone observers can't always see. Computers are more likely to goof up with simpler recognition tasks, like identifying barnyard animals. Still, it's noteworthy that software emerged victorious in the first place.

The Google Brain Residency Program is a 12-month role designed to jumpstart your career in deep learning. Residents will work with world-famous scientists from the Google Brain Team. The goal of the residency program is to help the residents become productive and successful deep learning researchers.

The residency program is similar to spending a year in a top master's or Ph.D. program in deep learning. The residents are expected to read papers, work on research projects, and publish their work in top-tier venues. By the end of the program, residents are expected to gain significant research experience in deep learning.

The environment in the Google Brain Team is uniquely conducive to outstanding deep learning research: the Brain Team has world-class large scale infrastructure for training deep neural networks, large datasets in a wide variety of domains, and a high concentration of extremely strong deep learning scientists. It is an environment where pure research and important applications reinforce and support each other in a virtuous cycle. To get a taste for their work, please see the Brain publications list.

The ideal candidate has a bachelor’s degree or a graduate degree (e.g. M.S. or Ph.D.), preferably in computer science, mathematics, or statistics. Applications should present evidence of proficiency in programming and in the prerequisite courses. The evidence could take the form of a transcript, letters of recommendation, notable performance in competitions, or links to an open-source project that demonstrates programming and mathematical ability.

In addition to the items listed above, candidates should demonstrate a strong interest in the field, attention to detail, and a passion for deep learning. Examples include a link to a publication, a blog post, or an implementation of an (even slightly) novel learning algorithm, with an explanation of what is interesting about the algorithm and its performance.

The Google Brain Residency Program is based in Mountain View, California, and residents are expected to work on-site.

Computer researchers report that artificial intelligence has advanced considerably and has surpassed human capabilities on a narrow set of vision-related tasks.

The developments are notable because these so-called machine-vision systems are becoming commonplace in many aspects of life. They include car-safety systems that detect pedestrians and bicyclists, as well as video game controls, Internet search, and factory robots.

Researchers at the Massachusetts Institute of Technology, New York University, and the University of Toronto described a novel type of “one shot” machine learning on Thursday in the journal Science, in which a computer vision program outperformed a group of humans in classifying handwritten characters based on a single example.

The program was able to learn characters quickly across a range of languages and to generalize from what it had seen. The authors described this ability as similar to the way humans learn and understand new concepts.

The new approach, known as Bayesian Program Learning, or B.P.L., differs from the existing machine-learning techniques known as deep neural networks.

Neural networks can be trained to recognize human speech, spot objects in images, or identify various kinds of behavior by being exposed to massive sets of examples.
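That example-driven training principle can be shown at toy scale: below, a single artificial neuron (logistic regression, the simplest building block of the networks the article describes) learns to separate two clusters of points purely from labeled examples via gradient descent. This is a minimal illustrative sketch, not any system from the article:

```python
import numpy as np

# Two labeled clusters of 2-D points: the "massive set of examples,"
# shrunk to 100 samples for illustration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A single neuron: weights, bias, sigmoid output, trained by
# repeatedly nudging the parameters against the logistic-loss gradient.
w, b = np.zeros(2), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid prediction per example
    w -= 0.1 * X.T @ (p - y) / len(y)    # gradient step on the weights
    b -= 0.1 * np.mean(p - y)            # gradient step on the bias

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())  # fraction of training examples classified correctly
```

Deep networks stack thousands of such units and train on millions of examples, but the mechanism of learning from exposure to labeled data is the same.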

Even though such networks are modeled on the behavior of biological neurons, they do not yet learn the way humans do, by acquiring new concepts quickly. By comparison, the new software program described in the Science article can learn to recognize handwritten characters after “seeing” only a few examples, or even a single one.
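B.P.L. itself builds generative programs that compose pen strokes into characters; as a much simpler stand-in that conveys the flavor of one-shot classification (and is explicitly not the authors' method), a nearest-neighbor rule can label a new example using just one stored example per class:

```python
import numpy as np

def one_shot_classify(query, exemplars):
    """Assign `query` to the class whose single stored exemplar is
    nearest in Euclidean distance. A deliberately simple illustration
    of one-shot classification; B.P.L. is far more sophisticated."""
    labels = list(exemplars)
    dists = [np.linalg.norm(query - exemplars[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# One example per "character" class, as toy 2-D feature vectors
# (hypothetical values chosen for illustration).
exemplars = {
    "A": np.array([0.0, 0.0]),
    "B": np.array([5.0, 5.0]),
}
print(one_shot_classify(np.array([0.5, 0.2]), exemplars))  # "A"
```

A deep neural network would typically need many labeled examples per class to reach a comparable decision; the point of one-shot approaches is to do without them.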