One of the world’s oldest and most prestigious universities is taking on a new study focus that reflects its progressive approach to academia. With a grant from the nonprofit Leverhulme Trust, academics at the University of Cambridge in England will be able to study artificial intelligence ethics over the next ten years.

The research focus will be housed at the Leverhulme Centre for the Future of Intelligence, which will be established thanks to a $15 million grant from the Leverhulme Trust.

Working alongside Cambridge’s already influential Centre for the Study of Existential Risk (CSER), the new AI ethics center will pursue a common purpose: fostering responsible innovation and refreshing contemporary perspectives on the opportunities and threats of AI, according to Professor Huw Price, the university’s Bertrand Russell Professor of Philosophy, who will direct the Centre for the Future of Intelligence as well as CSER.

We are still thinking about AI using memes from science fiction movies made decades ago, Professor Price told the Wall Street Journal – in the case of 2001: A Space Odyssey, that was 50 years ago. “Stanley Kubrick was a brilliant film director, but we can do better than that now.”

Cambridge isn’t alone in recognizing the need for an ethical study of AI and taking action to establish one. With a $7 million contribution from Elon Musk to “keep AI robust and beneficial,” Cambridge, Massachusetts saw the birth of the Future of Life Institute last July. Meanwhile, Musk and other wealthy and concerned investors committed $1 billion to OpenAI, a non-profit research initiative intended to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Corporate acquisition of AI companies has also been a trend in recent years. Google, Facebook, Microsoft, and Apple have all recently invested in AI software that shows promise in tasks ranging from facial recognition to speech recognition and beyond.

Price and his Cambridge colleagues will partner with researchers at the Oxford Martin School and the University of California, Berkeley to combine the insights of software programmers and philosophers and to develop code that would govern the behavior of AI systems.

“As a species, we need a successful transition to an era in which we share the planet with high-level, non-biological intelligence,” Price told the Wall Street Journal. “We don’t know how far away that is, but we can be pretty confident that it’s in our future. Our challenge is to make sure that goes well.”

All this interest in artificial intelligence seems sudden, but it isn’t whimsical. 2015 was regarded as a “breakthrough year” for the development of AI. Cloud computing infrastructure has provided cheaper and more powerful means of running programs. Self-driving cars toured the world while an AI program learned to play old Atari games all by itself. Likewise, a motorcycle-riding robot vowed to surpass us, and a robotics company asked customers to refrain from having sex with its product. Though AI made headlines both silly and serious in 2015, the Cambridge initiative lends academic weight to the importance of considering the rights and wrongs of artificial intelligence.

A new report issued this week by Stanford University gives a 2016 update on the 100-year study of AI, pioneered by Stanford alumnus and Microsoft researcher Eric Horvitz in 2009 and co-led by bioengineering and computer science professor Russ Altman. The report reflects the significance of new efforts by top tech companies to set standards for AI ethics. According to those involved in the creation of the industry partnership – which at present includes researchers from Alphabet, Amazon, Facebook, IBM, and Microsoft – the effort is centered on ensuring that AI is developed to help, not hurt, human beings. A hush-hush atmosphere currently surrounds the industry group, whose name has not yet been announced, though inside sources describe it as modeled on the Global Network Initiative, a human-rights organization advocating for freedom of expression and privacy rights.

Venture capitalist Vinod Khosla, the founder of Khosla Ventures, is more concerned about the threat of human gene manipulation than about AI gone haywire. Khosla made the comment in a Quora response after he was asked about previous remarks by Tesla’s Elon Musk. Khosla stated that both are tools that need to be managed, with the potential to bring both benefits and destruction. He noted the development of CRISPR, a technique now being used to edit animal and human genes; Chinese researchers have already used the method to alter genes in human embryos. His concern relates to unregulated use and the creation of more intelligent “designer humans” produced at a faster rate than nature yields.

While computer science and AI have historically been dominated by men, the share of women entering these fields has, according to the American Association of University Women, declined from 37 percent in 1984 to just 18 percent today. Stanford’s Fei-Fei Li sees this as a real problem, and not just for gender equality. She outlines three main reasons why more diversity is needed in the field of AI:

1. Economics – a larger labor force is needed to handle all of the AI work that needs to be, and is being, done today.

2. Creativity – evidence shows that more diverse groups tend to produce more ingenious solutions.

3. Fairness – teaching machines requires the knowledge and expertise of humans; a dominant demographic is likely to instill biases, even unintentional ones, when training machines on the massive data sets required.

Li believes educators, business leaders, and others can help diversify the playing field by presenting AI in a humanistic light, as a potentially great tool that can serve society in a multitude of ways.

A new AI system out of MIT has successfully proven its ability to make inferences and produce handwritten characters that are mostly indistinguishable from a human’s. The system is based on a computational structure known as a “probabilistic program” and is the thesis work of former PhD student Brenden Lake. Unlike more traditional computer programs, which break a complicated system down into its most basic parts, a probabilistic program makes inferences to fill in gaps by analyzing a large array of examples. The system was subjected to three tests, each of which asked the machine to produce characters in a writing system that varied by degrees from the character to which it had first been exposed. Humans took the same three tests. Judges correctly identified the machine’s output only about 50 percent of the time across all three tasks, equivalent to chance.
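To make the probabilistic-program idea concrete, here is a minimal, self-contained Python sketch – not Lake’s actual Bayesian Program Learning model, and with entirely hypothetical names and parameters. It posits a toy generative model in which a “character” is a noisy rendering of a latent sequence of stroke angles, then uses importance sampling to infer that latent plan from a single observed example and generate new variants from it:

    # Toy "probabilistic program" (hypothetical, for illustration only):
    # a character is a latent plan of stroke angles rendered with motor
    # noise; inference recovers the latent plan from a single example.
    import math
    import random

    random.seed(0)

    NOISE = 0.15  # assumed motor-noise standard deviation (hypothetical)

    def render(plan, noise=NOISE):
        """Render a character: each latent stroke angle gets Gaussian noise."""
        return [angle + random.gauss(0.0, noise) for angle in plan]

    def log_likelihood(observed, plan, noise=NOISE):
        """Gaussian log-likelihood of an observed rendering under a plan."""
        return sum(
            -0.5 * ((o - a) / noise) ** 2 - math.log(noise * math.sqrt(2 * math.pi))
            for o, a in zip(observed, plan)
        )

    def infer_plan(observed, num_samples=5000):
        """Importance sampling: draw plans from a uniform prior, weight by
        fit, then resample one plan in proportion to its posterior weight."""
        candidates = [[random.uniform(0.0, math.pi) for _ in observed]
                      for _ in range(num_samples)]
        weights = [math.exp(log_likelihood(observed, p)) for p in candidates]
        r, acc = random.uniform(0.0, sum(weights)), 0.0
        for plan, weight in zip(candidates, weights):
            acc += weight
            if acc >= r:
                return plan
        return candidates[-1]

    # One-shot learning: observe a single example, infer the latent plan,
    # then generate a new variant of the same "character".
    example = render([0.3, 1.2, 2.0])
    plan = infer_plan(example)
    print("observed:", [round(x, 2) for x in example])
    print("inferred:", [round(x, 2) for x in plan])
    print("variant: ", [round(x, 2) for x in render(plan)])

Run as-is, the script observes one simulated example, recovers a plausible stroke plan, and prints a fresh variant drawn from that plan – a toy version of the one-shot generation task the MIT system was judged on.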
