DeepMind Partners with UK to Streamline Health Services, Chinese Researchers Claim AI-Powered System can Spot Criminals, and More – This Week in Artificial Intelligence 11-25-16

Daniel Faggella is the founder and CEO at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and many global enterprises, Daniel is a sought-after expert on the competitive strategy implications of AI for business and government leaders.

A new app from Google’s DeepMind called Streams will give UK hospitals access to patients’ histories and test results. After signing a five-year contract with the UK’s National Health Service, DeepMind now has access to the healthcare records of more than 1.6 million patients registered with one of the Royal Free NHS Trust’s three London hospitals. The contract and app are a trial run at streamlining healthcare provision and delivery, and DeepMind claims the system could save healthcare administrators over half a million hours per year. Critics have been quick to scrutinize the large amount of data that DeepMind would not otherwise have access to, but analysis of this initial foray may prove instructive as machine and deep learning enter the prized but sheltered healthcare sector.

Last week, Nvidia reported record quarterly sales of $2 billion, an increase of 54 percent from one year ago. Chief Executive Jen-Hsun Huang said:

“We had a breakout quarter – record revenue, record margins and record earnings were driven by strength across all product lines. Our new Pascal GPUs are fully ramped and enjoying great success in gaming, VR, self-driving cars and datacenter AI computing.”

Nvidia has recently provided its chipsets to big-name consumer product makers, including Nintendo (graphics for its Switch game console) and Microsoft (the Surface Studio desktop computer), as well as GPU-enabled servers for Amazon Web Services, Microsoft, IBM, and Alibaba. In addition to Tesla agreeing to install Nvidia Drive PX 2 computers in its newer autonomous cars, Nvidia has also formed a collaborative research partnership in advanced self-driving technology with New York University’s pioneering deep learning team. Nvidia’s (and others’) success in this domain speaks to an emerging ecosystem of cloud-based infrastructure integrated with AI technologies.

Facebook engineer Serkan Piantino, who helped found the company’s AI research lab with computer scientist Yann LeCun, is leaving Facebook to launch Top 1 Networks. The startup will provide access to Nvidia’s GPUs as a cloud-based service, similar to Amazon and other related offerings. Regarding what makes his company’s services better than existing providers’, Piantino said:

“They don’t have the latest and greatest Pascal cards from Nvidia. Once you begin to get one generation behind, the performance decreases drastically.”

Top 1 Networks is still in very early stages, with Piantino as its sole employee, funding the venture out of his own pocket. To date, he’s built a first hardware prototype and is in talks with a potential first group of paying customers.

Minority Report-style AI Learns to Predict if People are Criminals from Their Facial Features

Two researchers from Shanghai Jiao Tong University published a research paper last week claiming they’ve created an algorithm that can identify a convicted criminal from facial features alone. Tested on a set of 186 photos, the system performed with 90 percent accuracy; it was trained on a database of more than 1,600 images, of which half were of convicted criminals. The implications of this research are obviously controversial, with some worried that China could add the technology to its AI-powered security initiatives (which already include predictive policing). Dr. Richard Tynan, a technologist at Privacy International, commented:

“It demonstrates the arbitrary and absurd correlations that algorithms, AI, and machine learning can find in tiny datasets. This is not the fault of these technologies but rather the danger of applying complex systems in inappropriate contexts.”

While researchers Wu and Zhang claim that the algorithm discerns a higher degree of dissimilarity among the faces of criminals, other research initiatives in this domain will likely be needed to validate these claims.
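The evaluation setup described above is standard supervised classification: train a model on labeled images, then measure accuracy on held-out examples. A toy sketch on synthetic feature vectors illustrates the mechanics (the data, the nearest-centroid classifier, and all names here are illustrative stand-ins, not the authors’ actual method or dataset):

```python
import random

random.seed(0)

def make_samples(n, offset):
    # Synthetic 3-dimensional "feature" vectors; the offset separates
    # the two classes. Real systems would extract features from images.
    return [[random.gauss(offset, 1.0) for _ in range(3)] for _ in range(n)]

# Hypothetical stand-in for a labeled training set and a held-out test set,
# loosely mirroring the paper's ~1,600 training / 186 test split.
train = [(x, 0) for x in make_samples(800, 0.0)] + \
        [(x, 1) for x in make_samples(800, 2.0)]
test  = [(x, 0) for x in make_samples(93, 0.0)] + \
        [(x, 1) for x in make_samples(93, 2.0)]

def centroid(vectors):
    # Per-dimension mean of a list of vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

c0 = centroid([x for x, y in train if y == 0])
c1 = centroid([x for x, y in train if y == 1])

def predict(x):
    # Assign the class whose training centroid is nearest (squared distance).
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

As Dr. Tynan’s comment suggests, a high accuracy number on a small, cleanly separated dataset like this synthetic one says little about whether the learned correlations are meaningful.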

This week, Google announced that its Neural Machine Translation (GNMT) system has been extended to translate between multiple languages with a single model, a step toward scaling to all 103 languages Google Translate supports. Instead of changing the underlying architecture, GNMT adds a “token” at the beginning of the input sentence to specify the target language for translation. This method improves translation quality and also makes possible “zero-shot translation”, essentially allowing the system to translate between language pairs to which it has not yet been exposed. Google was able to look into the system as it performed zero-shot translations to try to discern the underlying approach, and determined that GNMT “must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations”. The system is currently used to serve 10 of the 16 recently released language pairs.
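The token mechanism above is simple to illustrate. In a minimal sketch (the `<2xx>` token format follows Google’s published description of the multilingual system; the function name is my own), the only change to the input is a prepended token naming the target language:

```python
def prepare_multilingual_input(sentence: str, target_lang: str) -> str:
    """Prepend an artificial token telling the model which language to
    produce -- the only change the multilingual extension makes to the
    input; the model architecture itself stays the same."""
    return f"<2{target_lang}> {sentence}"

# The same English sentence, routed to Spanish or Japanese output:
print(prepare_multilingual_input("How are you?", "es"))  # <2es> How are you?
print(prepare_multilingual_input("How are you?", "ja"))  # <2ja> How are you?
```

Because one model sees many such pairs during training, it can be handed a token for a pairing it was never trained on, which is what enables zero-shot translation.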

A recently published paper by independent researchers Federico Pistono and Roman Yampolskiy sheds light on the “unethical” side of developing smart and capable AI systems. Titled Unethical Research: How to Create a Malevolent Artificial Intelligence, the paper attempts to identify conditions under which a harmful AI could arise. Some of the named “clear signs” (already exhibited by some of today’s leading AI companies) include the absence of an oversight board during development of an AI system, the existence of closed-source code, and (paradoxically) AI developed via open-source software. Open-sourced technology is gaining traction in the AI market; the nonprofit OpenAI, whose mission focuses on developing AI to benefit all of humanity, is one of the greatest proponents of this type of AI development. Elon Musk, among the backers who pledged $1 billion to fund OpenAI, also partly funded Dr. Yampolskiy’s work on this paper. The question remains whether AI will be as closely regulated as malicious software has been in cybersecurity.

An article published this week in the journal Nature elaborates on Google’s big AI win against a top-ranked human player in the ancient game of Go. The AI system AlphaGo beat European champion Fan Hui in a five-game match at Google DeepMind’s London office in October. An encore match is scheduled to take place in March between AlphaGo and the world’s top human Go competitor, Lee Sedol, in Seoul, South Korea. Though Go’s rules are simpler than those of chess, there are many more possible moves at each turn, which poses a significant challenge for human and machine alike. The win is a breakthrough for Google’s continued investment in deep learning, a branch of artificial intelligence that uses artificial neural networks to analyze data.

Stay Ahead of the Machine Learning Curve

At Emerj, we have the largest audience of AI-focused business readers online - join other industry leaders and receive our latest AI research, trends analysis, and interviews sent to your inbox weekly.