Recap: the International Forum of the Americas’ Discussion on AI

Thursday, June 14th marked the close of the four-day International Forum of the Americas in Montreal, self-described as “committed to heightening knowledge and awareness of the major issues concerning economic globalization, with a particular emphasis on the relations between the Americas and other continents.” The Forum hosted over 4,000 participants and more than 200 speakers, and took place on the heels of this year’s G7 summit in Charlevoix, Quebec.

Most notable was the theme of technological disruption, specifically artificial intelligence, which seemed to underlie many of the discussions. The plenary session dedicated to this topic brought in experts at the top of the field: Abhay Parasnis (CTO, Adobe), Sylvain Duranton (Global Leader, BCG Gamma), and Yoshua Bengio (Scientific Director, MILA; Professor, Université de Montréal; Co-Founder, Element AI; Scientific Director, IVADO). Their panel, Competing in the Age of AI, was moderated by Axios’ technology reporter, Erica Pandey.

Pandey set the tone for the conversation, opening with the claim, “there is enormous potential for AI to stimulate economic development, but we cannot overlook concerns about its societal impact.” Parasnis invoked Amara’s Law, which holds that the tech industry tends to overestimate the short-term impact of its innovations and underestimate their long-term impact. This underscores the need for collective discussions about AI, since it is typically long-term impacts that shape society the most. Two main takeaways emerged in this regard.

First, artificial intelligence must benefit everyone. Bengio pointed out that without careful consideration, at a micro level those already in seats of power will be able to further consolidate their positions: they will hold onto more talent, more money, and more data than those for whom artificial intelligence is inaccessible for lack of any one of those resources. At a macro level, countries with the means to take advantage of artificial intelligence (through investment in research, for example) will further widen the gap between rich and poor countries.

That said, among top executives polled globally on the impact of AI, 80% believed that AI will change the world, and 40-50% of people believed their jobs will be replaced by machines. Yet in OECD countries, only 30% of the polled workforce said they had discussed the matter with their boss, compared to 79% in China. This discrepancy points to a need for companies in OECD countries to strengthen their change-management strategies in order to keep pace with China’s, or risk losing talent, market share, and overall global standing as AI adoption rises. While some of this change management can come from hiring new talent in the field of AI, ultimately the most financially sound long-term strategy is to retrain employees and build in-house talent to face these disruptions. Still, as Bengio was quick to reassure, while some jobs might become redundant through automation, automating those tasks which can be automated will allow people to focus on the more human aspects of a job, effectively meaning that those jobs end up being done better as a result.

Second, the ethical tenets of artificial intelligence must be well-established. Parasnis reflected on this, noting that whenever society faces uncharted territory, people have “the ethics discussion” as though we are not in control of what the ethics and values of said disruptive force are going to be. Still, he also pointed out that as machines and algorithms become increasingly present in our lives, they are going to exacerbate existing inequalities and discriminatory practices. There are multiple reasons for this. The first is that machine learning is developed by feeding real-world data into an algorithm, and needless to say, the real world is full of inequality and discrimination. The second follows from this: whereas humans can apply their own moral judgment and code of ethics to temper inequality and discrimination, machines cannot. Bengio highlighted that the fear underlying the prospect of “killer robots” is the same one we must consider in the conversation about AI aggravating discrimination, in that robots will not have the moral compass that most adult humans have. For that reason, frameworks such as the Montreal Declaration for a Responsible Development of Artificial Intelligence are being brought to the public as open participatory processes in order to encourage all segments of society to contribute to the future of AI. At its core, this document seeks to guide the development of AI by grounding it in seven primary principles: well-being, autonomy, justice, privacy, knowledge, democracy, and responsibility.

In response to the question of how Montreal became a hub for AI, Bengio pointed to multiple factors. The first relates to the bet that a few institutions, namely McGill University and Université de Montréal, made in the late 1990s on researchers studying what was then a niche topic in computer science. This bet, according to Bengio, allowed those researchers to develop what is now the largest deep-learning group in the world, which in turn led to a snowball effect of university and government investment in deep-learning research labs and organizations in Montreal. As a result, whereas Canadian startups once moved to Silicon Valley as soon as they could afford to, they are now realizing they are better off moving to or staying in Montreal, given the city’s concentration of AI researchers. The second factor is the nature of Montreal’s community culture, which shapes how organizations work together. In the AI space, this culture is amplified by the fact that many people at organizations working with AI are former MILA students.

In sum, Parasnis pointed out that 25 years ago, if you had asked the general public whether they were ready for GPS satellites, most people either would not have cared or would not have understood the impact GPS satellites would have on their day-to-day lives or in the long term. While GPS satellites may not have had the same cross-cutting societal impact that artificial intelligence arguably will, the example is a useful way to understand the chasm between specialists’ and policy-makers’ understanding of the topic and that of the general public, not least when considering the Montreal Declaration’s efforts to include broader segments of the citizenry in developing a framework for the responsible development of AI. It is also worth highlighting that, if done right, these disruptive technological shifts can facilitate massive progress in the erasure of inequalities. Take the smartphone, for example, which allowed entire demographics to leapfrog certain stages of development: there are people in the world who lack access to running water but have Facebook accounts. Finally, and perhaps most importantly, as Bengio highlighted in the context of trust and data in artificial intelligence, “we need to do this right.” The consequences of not “doing it right” have been briefly touched upon here, but they also broadly include losing the trust of the public, which, for progress’ sake, we cannot afford.