Artificial intelligence: Between myth and reality

Are machines likely to become smarter than humans? No, says Jean-Gabriel Ganascia: this is a myth inspired by science fiction. The computer scientist walks us through the major milestones in artificial intelligence (AI), reviews the most recent technical advances, and discusses the ethical questions that require increasingly urgent answers.

As a scientific discipline, AI officially began in 1956, during a summer workshop organized by four American researchers – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – at Dartmouth College in New Hampshire, in the United States. Since then, the term “artificial intelligence”, probably coined to create a striking impact, has become so popular that today everyone has heard of it. This branch of computer science has continued to expand over the years, and the technologies it has spawned have contributed greatly to changing the world over the past sixty years.

However, the success of the term AI sometimes rests on a misunderstanding: its use to refer to an artificial entity endowed with intelligence which, as a result, would compete with human beings. This idea, which harks back to ancient myths and legends, like that of the golem [from Jewish folklore, an image endowed with life], has recently been revived by contemporary personalities including the British physicist Stephen Hawking (1942-2018), American entrepreneur Elon Musk, American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). We will not discuss this second meaning here, because, at least for now, it can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed by experiments and empirical observations.

For McCarthy, Minsky, and the other researchers of the Dartmouth Summer Research Project on Artificial Intelligence, AI was initially intended to simulate each of the different faculties of intelligence – human, animal, plant, social or phylogenetic – using machines. More precisely, this scientific discipline was based on the conjecture that all cognitive functions – especially learning, reasoning, computation, perception, memorization, and even scientific discovery or artistic creativity – can be described with such precision that it would be possible to programme a computer to reproduce them. In the more than sixty years that AI has existed, nothing has either disproved or irrefutably proved this conjecture, which remains both open and full of potential.

Uneven progress

In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.

The time of the prophets

First of all, in the euphoria of AI’s origins and early successes, researchers gave free rein to their imagination, indulging in reckless pronouncements for which they were heavily criticized later. For instance, in 1958, American political scientist and economist Herbert A. Simon – who received the Nobel Prize in Economic Sciences in 1978 – declared that, within ten years, machines would become world chess champions if they were not barred from international competitions.

The dark years

By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.

Semantic AI

The work went on nevertheless, but the research was given new direction. It focused on the psychology of memory and the mechanisms of understanding – with attempts to simulate these on computers – and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s, and also led to the development of expert systems, so called because they use the knowledge of skilled specialists to reproduce their thought processes. Expert systems raised enormous hopes in the early 1980s with a whole range of applications, including medical diagnosis.

Neo-connectionism and machine learning

Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and to reprogramme themselves automatically, using their own experiences.

This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where techniques from AI, computer science, artificial life and other disciplines were combined to produce hybrid systems.

From AI to human-machine interfaces

Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to the calculation of emotions (affective computing), which evaluates the reactions of a subject feeling emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).

Renaissance of AI

Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques, based on the use of formal neural networks. A range of very successful applications in several areas – including speech and image recognition, natural language comprehension and autonomous cars – are leading to an AI renaissance.
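As a rough illustration of the building block these techniques rely on (all values below are invented for the example, not drawn from the article), a formal neuron computes a weighted sum of its inputs and passes the result through a non-linear activation; deep learning stacks many layers of such neurons and tunes their weights from data:

```python
import math

def formal_neuron(inputs, weights, bias):
    """One formal neuron: a weighted sum of inputs passed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # squashes the sum into the interval (0, 1)

# Hypothetical toy values: two inputs with hand-picked weights.
activation = formal_neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(round(activation, 3))  # → 0.668
```

In a real deep learning system, millions of such neurons are arranged in layers and their weights are adjusted automatically, by gradient descent over large datasets, rather than chosen by hand as here.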

Applications

Many achievements using AI techniques surpass human capabilities – in 1997, a computer programme defeated the reigning world chess champion, and more recently, in 2016, other computer programmes have beaten the world’s best Go [an ancient Chinese board game] players and some top poker players. Computers are proving, or helping to prove, mathematical theorems; knowledge is being automatically constructed from huge masses of data, in terabytes (10^12 bytes), or even petabytes (10^15 bytes), using machine learning techniques.

As a result, machines can recognize speech and transcribe it – just like typists did in the past. Computers can accurately identify faces or fingerprints from among tens of millions, or understand texts written in natural languages. Using machine learning techniques, cars drive themselves; machines are better than dermatologists at diagnosing melanomas using photographs of skin moles taken with mobile phone cameras; robots are fighting wars instead of humans; and factory production lines are becoming increasingly automated.

Scientists are also using AI techniques to determine the function of certain biological macromolecules, especially proteins and genomes, from the sequences of their constituents ‒ amino acids for proteins, bases for genomes. More generally, all the sciences are undergoing a major epistemological rupture with in silico experiments – so named because they are carried out by computers from massive quantities of data, using powerful processors whose cores are made of silicon. In this way, they differ from in vivo experiments, performed on living matter, and above all, from in vitro experiments, carried out in glass test-tubes.

Today, AI applications affect almost all fields of activity – particularly in the industrial, banking, insurance, health and defence sectors. Several routine tasks are now automated, transforming many trades and eventually eliminating some.

What are the ethical risks?

With AI, most dimensions of intelligence ‒ except perhaps humour ‒ are subject to rational analysis and reconstruction, using computers. Moreover, machines are exceeding our cognitive faculties in most fields, raising fears of ethical risks. These risks fall into three categories – the scarcity of work, because it can be carried out by machines instead of humans; the consequences for the autonomy of the individual, particularly in terms of freedom and security; and the overtaking of humanity, which would be replaced by more “intelligent” machines.

However, if we examine the reality, we see that work (done by humans) is not disappearing – quite the contrary – but it is changing and calling for new skills. Similarly, an individual’s autonomy and freedom are not inevitably undermined by the development of AI – so long as we remain vigilant in the face of technological intrusions into our private lives.

Finally, contrary to what some people claim, machines pose no existential threat to humanity. Their autonomy is purely technological, in that it corresponds only to material chains of causality that go from the taking of information to decision-making. On the other hand, machines have no moral autonomy, because even if they do confuse and mislead us in the process of making decisions, they do not have a will of their own and remain subjugated to the objectives that we have assigned to them.


French computer scientist Jean-Gabriel Ganascia is a professor at Sorbonne University, Paris. He is also a researcher at LIP6, the computer science laboratory at the Sorbonne; a fellow of the European Association for Artificial Intelligence; a member of the Institut Universitaire de France; and chairman of the ethics committee of the National Centre for Scientific Research (CNRS), Paris. His current research interests include machine learning, symbolic data fusion, computational ethics, computer ethics and digital humanities.

Deloitte Unveils 2018 North America Technology Fast 500™ Rankings

Deloitte today released the “2018 North America Technology Fast 500,” an annual ranking of the fastest-growing North American companies in technology, media, telecommunications, life sciences and energy tech sectors. SwanLeap claimed the top spot with a growth rate of 77,260 percent from 2014 to 2017.

SwanLeap is a leading end-to-end transportation technology provider for logistics managers and supply chain decision-makers. Founded in 2013, SwanLeap uses artificial intelligence and machine learning to reduce costs for corporate shippers and improve their supply chain performance. Its technology is helping clients secure average annual transportation savings of 27 percent. SwanLeap is one of two Madison, Wisconsin-based companies in the top 10 this year.

Awardees are selected for this honor based on percentage fiscal year revenue growth from 2014 to 2017. Overall, the 2018 Technology Fast 500 companies achieved revenue growth ranging from 143 percent to 77,260 percent over the three-year time frame, with a median growth rate of 412 percent.
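The ranking criterion above is simple percentage revenue growth over the three-year window. As a minimal sketch (the dollar figures are hypothetical; Deloitte does not publish them), the computation looks like this:

```python
def growth_percent(base_revenue: float, final_revenue: float) -> float:
    """Percentage revenue growth from the base fiscal year to the final one."""
    return (final_revenue / base_revenue - 1) * 100

# Hypothetical figures: $100,000 in FY2014 growing to $77.36M in FY2017
# would reproduce SwanLeap's reported growth rate.
print(round(growth_percent(100_000, 77_360_000)))  # → 77260
```

A doubling of revenue over the window would score 100 percent on this measure, which is why small base-year revenues can produce the eye-catching five-digit growth rates at the top of the list.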

“Congratulations to the Deloitte 2018 Technology Fast 500 winners on this impressive achievement,” said Sandra Shirai, vice chairman, Deloitte LLP, and U.S. technology, media and telecommunications leader. “These companies are innovators who have converted their disruptive ideas into useful products, services and experiences that can captivate new customers and drive remarkable growth.”

“It is both humbling and validating for SwanLeap to be listed as the No. 1 fastest-growing company on the Deloitte Fast 500,” said Brad Hollister, CEO and co-founder of SwanLeap. “Our team has worked relentlessly to deliver unprecedented clarity and control to a fragmented shipping market through technology powered by artificial intelligence, curating cost-effective and personalized supply chain recommendations in real time. We are grateful to our employees and customers for making this achievement possible.”

The Technology Fast 500’s top 10 include:

2018 Rank | Company | Sector | Revenue Growth (2014 to 2017) | City, State
1 | SwanLeap | Software | 77,260 percent | Madison, Wisconsin
2 | Justworks | Software | 27,150 percent | New York, New York
3 | Shape Security | Software | 23,576 percent | Mountain View, California
4 | Periscope Data | Software | 23,227 percent | San Francisco, California
5 | Arrowhead Pharmaceuticals Inc. | Biotechnology/pharmaceutical | 17,847 percent | Pasadena, California
6 | Viveve Medical Inc. | Medical devices | 16,887 percent | Englewood, Colorado
7 | iLearningEngines | Software | 14,848 percent | Bethesda, Maryland
8 | Exact Sciences Corp. | Biotechnology/pharmaceutical | 14,694 percent | Madison, Wisconsin
9 | Podium | Software | 13,381 percent | Lehi, Utah
10 | Markforged | Electronic devices/hardware | 12,687 percent | Watertown, Massachusetts

Silicon Valley has largest share of winners

Deloitte’s Technology Fast 500 winners represent more than 38 states and provinces across North America.

California’s Silicon Valley continues to produce fast-growing companies, leading regional representation with 18 percent of this year’s Fast 500. The New York metro area also fared well with 14 percent of the companies; New England and Greater Washington, D.C., areas followed with 7 percent each, and Greater Los Angeles accounted for 6 percent. Following is a summary of the 2018 ranking by regions with a significant concentration of winners:

Software continues to dominate the list for the 23rd straight year

Software companies continue to deliver the highest growth rates for the 23rd straight year, representing 64 percent of the entire list and six of the top 10 winners overall. Of the private companies on the list, 34 percent identify themselves as part of the software as a service (SaaS) subsector, 17 percent in the enterprise software subsector, and 9 percent in fintech. Since the creation of the ranking, software companies have consistently made up the majority of winners, with a median growth rate of 412 percent in 2018.

Digital content, media and entertainment companies make up the second most prevalent sector in this year’s rankings, accounting for 12 percent of the Fast 500 companies and achieving a median growth rate of 385 percent in 2018. Biotechnology/pharmaceutical companies rank third at 11 percent of the list with a median growth rate of 411 percent.

The Technology Fast 500 by industry sector:

Sector | Percentage | Sector Leader | Median Revenue Growth (2014 to 2017)
Software | 64 percent | SwanLeap | 412 percent
Digital content/media/entertainment | 12 percent | Remark Holdings Inc. | 385 percent
Biotechnology/pharmaceutical | 11 percent | Arrowhead Pharmaceuticals Inc. | 411 percent
Medical devices | 5 percent | Viveve Medical Inc. | 396 percent
Communications/networking | 3 percent | xG Technology Inc. | 394 percent
Electronic devices/hardware | 3 percent | Markforged | 410 percent
Semiconductor | 1 percent | Aquantia Corp. | 206 percent
Energy tech | 1 percent | Momentum Solar | 693 percent

Four out of five companies received venture backing

In the 2018 Fast 500 rankings, 80 percent of the companies were backed by venture capital at some point in their company history. Notably, 25 of the top 30 companies on the Technology Fast 500 in 2018 received venture funding.

“Software, which accounts for nearly two of every three companies on the list, continues to produce the most exciting technologies of the 21st century, including innovations in artificial intelligence, predictive analytics and robotics,” said Mohana Dissanayake, partner, Deloitte & Touche LLP and industry leader for the technology, media and telecommunications industry, within Deloitte’s audit and assurance practice. “This year’s ranking demonstrates what is likely a national phenomenon, where many companies from all parts of America are transforming the way we do business by combining breakthrough research and development, entrepreneurship and rapid growth.”


Quantum Technologies Flagship kicks off with first 20 projects

The Quantum Technologies Flagship, a €1 billion initiative, was launched today at a high-level event in Vienna hosted by the Austrian Presidency of the Council of the EU.

The Flagship will fund over 5,000 of Europe’s leading quantum technologies researchers over the next ten years and aims to place Europe at the forefront of the second quantum revolution. Its long-term vision is to develop a European “quantum web”, in which quantum computers, simulators and sensors are interconnected via quantum communication networks. This will help kick-start a competitive European quantum industry, making research results available as commercial applications and disruptive technologies. The Flagship will initially fund 20 projects with a total of €132 million via the Horizon 2020 programme, and from 2021 onwards it is expected to fund a further 130 projects. Its total budget is expected to reach €1 billion, providing funding for the entire quantum value chain in Europe, from basic research to industrialisation, and bringing together researchers and the quantum technologies industry.

Andrus Ansip, Commission Vice-President for the Digital Single Market, said: “Europe is determined to lead the development of quantum technologies worldwide. The Quantum Technologies Flagship project is part of our ambition to consolidate and expand Europe’s scientific excellence. If we want to unlock the full potential of quantum technologies, we need to develop a solid industrial base making full use of our research.”

Mariya Gabriel, Commissioner for Digital Economy and Society, added: “The Quantum Technologies Flagship will form a cornerstone of Europe’s strategy to lead in the development of quantum technologies in the future. Quantum computing holds the promise of increasing computing speeds by orders of magnitude and Europe needs to pool its efforts in the ongoing race towards the first functional quantum computers.”

In the early 20th century, the first quantum revolution allowed scientists to understand and use basic quantum effects in devices, such as transistors and microprocessors, by manipulating and sensing individual particles.

The second quantum revolution will make it possible to use quantum effects to make major technological advances in many areas including computing, sensing and metrology, simulations, cryptography, and telecommunications. Benefits for citizens will ultimately include ultra-precise sensors for use in medicine, quantum-based communications, and Quantum Key Distribution (QKD) to improve the security of digital data. In the long term, quantum computers have the potential to solve computational problems that would take current supercomputers longer than the age of the universe. They will also be able to recognise patterns and train artificial intelligence systems.

Next steps

From October 2018 until September 2021, 20 projects will be funded by the Flagship under the coordination of the Commission. They will focus on four application areas – quantum communication, quantum computing, quantum simulation, quantum metrology and sensing – as well as the basic science behind quantum technologies. More than one third of participants are industrial companies from a wide range of sectors, with a large share of SMEs.

Negotiations are ongoing between the European Parliament, Council and Commission to ensure that quantum research and development will be funded in the EU’s multi-annual financial framework for 2021-2027. Quantum technologies will be supported by the proposed Horizon Europe programme for research and space applications, as well as the proposed Digital Europe programme, which will develop and reinforce Europe’s strategic digital capacities, supporting the development of Europe’s first quantum computers and their integration with classical supercomputers, and of a pan-European quantum communication infrastructure.

Background

Since 1998, the Commission’s Future and Emerging Technologies (FET) programme has provided around €550 million of funding for quantum research in Europe. The EU has also funded research on quantum technologies through the European Research Council (ERC). Since 2007 alone, the ERC has funded more than 250 research projects related to quantum technologies, worth some €450 million.

The Quantum Technologies Flagship is currently supported by Horizon 2020 as part of the FET programme, which currently runs two other Flagships (The Graphene Flagship and the Human Brain Project Flagship). The FET programme promotes large-scale research initiatives to drive major scientific advances and turn them into tangible innovations creating benefits for the economy and society across Europe. Funding for the Flagship project comes from Horizon 2020, its successor programme Horizon Europe and national funding.

The Quantum Technologies Flagship is also a component of the Commission’s European Cloud Initiative launched in April 2016, as part of a series of measures to support and link national initiatives for the digitisation of Europe’s industry.


Russiagate and the current challenges of cyberspace: Interview with Elena Chernenko

PICREADI presents an interview with Elena Chernenko, a prominent Russian expert in journalism and cybersecurity and deputy head of the foreign desk at the Kommersant daily newspaper in Moscow. The conversation covers hackers, Russiagate and the current challenges of cyberspace.