Europe’s plan to catch up to the United States and China in an artificial intelligence (AI) arms race is coming into focus. The European Commission today announced that it will devote €1.5 billion to AI research funding through 2020. It also said it would present ethical guidelines on AI development by the end of the year, suggesting that Europe could become a precautionary counterweight to its global rivals in a field that has raised fears about a lack of fairness and transparency even as it has made great advances.

Both the United States and China practice “permissionless innovation: Break things as you go, and go fast,” says Eleonore Pauwels, a Belgian ethics researcher at the United Nations University in New York City. In contrast, Europeans “are betting on being the good guy,” she says. This could mean, for instance, developing AI systems that require smaller data sets, enhance privacy and trust, and are more transparent than their competitors, Pauwels says. “This is noble, but I don’t know if they have the means of their politics.”

The European measures come 1 month after France presented its own AI intentions, and a week after a U.K. Parliament report urged the government to draw up a policy to help the country become one of the world’s AI leaders.

The commission says it will fund basic research as well as research that could be spun off into the market, and it intends to help member states set up joint research centers across Europe. It also plans to update rules on the reuse of public sector information to include publicly available science and health data, the raw material needed to train many AI technologies. This plan follows a declaration signed on 10 April by 25 European countries, in which governments agreed to work together on AI and to consider AI research funding “as a matter of priority.” But that statement is nonbinding and does not set actual spending goals.

These policy announcements are largely based on the idea that Europe must catch up with the United States and China on AI. Jeffrey Ding, who studies AI governance at the University of Oxford in the United Kingdom and monitors the AI potential of different countries, finds that China trails the United States in every factor except access to data. He says Europe has strong AI research but a weak AI industry, in part because venture capital funding of AI startups in the United States and China dwarfs that of Europe.

Stéphan Eloïse Gras, a French digital humanities researcher at New York University (NYU) in New York City, says Europe’s ambitions are hindered by outdated industrial policies that provide too much support to big, risk-averse firms and not enough for risky startups. “We also need to come up with binding metrics that measure the human value of technological startups in other ways than user figures,” she says. Building humanities and social sciences—in which Europe has a strong tradition—into AI can help make sure that ethics is an integral part of these developments, rather than a detached musing or an afterthought, Gras adds.

There is indeed a “European angle on AI” that values privacy, transparency, and fairness, says Bernhard Schölkopf, a machine learning researcher at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. “However, it would be short-sighted for Europe to only focus on potential problems and let others push the boundaries of knowledge,” Schölkopf adds. “We do not yet understand well how to make [AI] systems robust, or how to predict the effect of interventions.”

Another issue for Europe is attracting researchers in a field where salaries have become astronomical. Europe does have world-class AI researchers, but it struggles to keep them, says Jean Ponce, an artificial vision researcher at France’s Ecole Normale Supérieure in Paris, who spent 22 years working in the United States and is now working on a French-U.S. AI agreement at NYU. Private firms may poach public researchers, but they need academia to keep producing knowledge and training engineers and researchers, Ponce says. High salaries are not everything: “As an academic, you have freedom to do what you want, and that’s not negligible.”

On 24 April, a group of nine prominent AI researchers, including Schölkopf, took matters into their own hands and offered suggestions in an open letter. They urge governments to set up an intergovernmental European Lab for Learning and Intelligent Systems (ELLIS), inspired by the European Molecular Biology Laboratory. ELLIS would be a “top employer in machine intelligence research,” and on par with leading world universities, the letter says, offering attractive salaries and “outstanding academic freedom and visibility.”