Gerd Leonhard’s thoughts, finds and other comments on the future of ethics in a digital world

“Watson and ‘The End of Programming’: ‘Everything you know until today is programmable—an entire era for decades has been programmable. Watson would be the beginning of a new era where you didn’t program. Machines would look at data, understand, reason over it, and they continue to learn: understand, reason and learn, not program, in my simple definition.’”

“The results are sobering. If there is a one in 100 chance that somebody with the technology will release it, and there are a few hundred individuals like this, then our civilization is doomed on a timescale of 100 years or so. If there are 100,000 individuals with this technology, then the probability of them releasing it needs to be less than one in 10⁹ for our civilization to last 1,000 years.”
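The arithmetic behind this estimate can be made explicit. A minimal sketch, under my own simplifying assumption (not the quoted author's model) that each person-year is an independent trial with the same release probability:

```python
# Sketch of the quoted survival arithmetic, assuming every person-year is
# an independent Bernoulli trial with the same release probability.
# The modeling choices here are illustrative, not the original author's.

def p_survive(per_capita_risk, n_people, years=1):
    """Probability that nobody releases the technology in the period."""
    return (1.0 - per_capita_risk) ** (n_people * years)

# A few hundred individuals, each with a 1-in-100 chance over the era:
# expected releases = 300 * 0.01 = 3, so survival is roughly exp(-3), ~5%.
print(p_survive(1 / 100, 300))

# 100,000 individuals over 1,000 years: keeping the expected number of
# releases at 0.1 requires a per-person-per-year risk near one in 10⁹.
print(p_survive(1e-9, 100_000, 1_000))
```

Both claims then follow from the expected number of releases λ = n·p·t: the survival probability is approximately e^(−λ), which collapses quickly once λ exceeds 1.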

“Ma, a former teacher, says he always warns government leaders to also “pay attention to education,” because right now we’re teaching children the wrong thing: that machines are better than humans. He believes this mindset will cost young people jobs in a future dominated by AI and computing.”

“Kurt Vonnegut said: “We are what we pretend to be, so we must be careful who we pretend to be.” This seems especially true now we have reached a new stage of marketing where we are not just consumers, but also the thing consumed. If you have friends you only ever talk to on Facebook, your entire relationship with them is framed by commerce. When we willingly choose to become unpaid content providers, we commercialise ourselves. And we are encouraged to be obsessed with numbers (of followers, messages, comments, retweets, favourites), as if operating in a kind of friend economy, an emotional stock market where the stock is ourselves and where we are encouraged to weigh our worth against others.”

“But we shouldn’t accept Facebook’s self-conception as sincere, either. Facebook is a carefully managed top-down system, not a robust public square. It mimics some of the patterns of conversation, but that’s a surface trait.

In reality, Facebook is a tangle of rules and procedures for sorting information, rules devised by the corporation for the ultimate benefit of the corporation. Facebook is always surveilling users, always auditing them, using them as lab rats in its behavioural experiments. While it creates the impression that it offers choice, in truth Facebook paternalistically nudges users in the direction it deems best for them, which also happens to be the direction that gets them thoroughly addicted. It’s a phoniness that is most obvious in the compressed, historic career of Facebook’s mastermind.”

“New technologies like those can be expected to bring with them a series of excruciating moral, political, and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing—deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.

New World Order

The US military has been funding, testing and deploying various shades of machine intelligence for a long time. In 2001, Congress even mandated that one-third of ground combat vehicles should be uncrewed by 2015—a target that has been missed. But the Harvard report argues that recent, rapid progress in artificial intelligence that has invigorated companies such as Google and Amazon is poised to bring an unprecedented surge in military innovation. “Even if all progress in basic AI research and development were to stop, we would still have five or 10 years of applied research,” Allen says.”

““Artificial intelligence is the future, not only for Russia but for all humankind,” he said, via live video beamed to 16,000 selected schools. “Whoever becomes the leader in this sphere will become the ruler of the world.””

“I think one of the examples that illustrates this really well is in the medical space, where Watson is helping doctors make decisions and parsing large quantities of data, but then ultimately working with them on a diagnosis in partnership. Can you talk a little bit about how that training takes place and then how the solution winds up delivering better outcomes?

The work that we've done in oncology is a good example: it's really a composition of multiple different kinds of algorithms that are used in different ways across the spectrum of work that needs to be performed. We start, for example, by looking at your medical record, using the cognitive system to go over all the notes the clinicians have taken over the years they've been working with you and finding what we call pertinent clinical information. What is the information in those medical notes that is now relevant to the consultation you're about to go into? Taking that, we do population similarity analytics, trying to find the other patients, the other cohorts, that are most similar to you, because that informs the doctor on how to think about different treatments, how those treatments might be appropriate for you, and how you're likely to react to them.

Then we go into what we call the standard of care practices, which are relatively well-defined techniques that doctors share on how to treat different patients for different kinds of diseases, recognizing that those are really designed for the average person. On top of that we lay what we call clinical expertise: having been taught by the best doctors in different diseases what to look for, where the outliers are, and how to reason about the different standard of care practices, the system weighs which of those is most appropriate, or how to take the different pathways through those care practices and apply them in the best way possible. Finally, we go in and look at the clinical literature, the roughly 600,000 articles in PubMed, for the advances in science in that field that are relevant to making this treatment recommendation.”
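The multi-stage workflow described in this interview can be sketched as a simple pipeline. Every name and rule below is a hypothetical placeholder of my own, not an actual IBM Watson component or API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the five stages described in the interview:
# pertinent clinical info -> cohort similarity -> standard of care ->
# clinical expertise -> literature ranking. All logic is placeholder.

@dataclass
class Patient:
    notes: list    # clinician notes accumulated over the years
    profile: dict  # e.g. age, diagnosis

def pertinent_clinical_info(patient):
    """Stage 1: pull the notes relevant to the upcoming consultation."""
    return [n for n in patient.notes if n.get("relevant")]

def similar_cohort(patient, population):
    """Stage 2: population-similarity analytics (here: naive matching)."""
    return [p for p in population
            if p.profile.get("diagnosis") == patient.profile.get("diagnosis")]

def standard_of_care(patient):
    """Stage 3: guideline treatments designed for the 'average' patient."""
    return ["chemo_regimen_A", "chemo_regimen_B"]

def apply_clinical_expertise(treatments, patient, cohort):
    """Stage 4: adjust guideline options with expert-taught outlier rules."""
    if patient.profile.get("age", 0) > 75:   # invented rule for illustration
        treatments = [t for t in treatments if t != "chemo_regimen_A"]
    return treatments

def rank_by_literature(treatments, literature):
    """Stage 5: re-rank against published evidence (here: citation counts)."""
    return sorted(treatments, key=lambda t: literature.get(t, 0), reverse=True)

def recommend(patient, population, literature):
    pertinent_clinical_info(patient)          # informs the consultation
    cohort = similar_cohort(patient, population)
    options = standard_of_care(patient)
    options = apply_clinical_expertise(options, patient, cohort)
    return rank_by_literature(options, literature)

patient = Patient(notes=[{"text": "...", "relevant": True}],
                  profile={"diagnosis": "X", "age": 80})
print(recommend(patient, [], {"chemo_regimen_B": 12}))
# → ['chemo_regimen_B']
```

The point of the sketch is the composition: each stage narrows or re-orders the candidate treatments, which is the "composition of multiple different kinds of algorithms" the interviewee describes.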

“Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people.”

Digital Ethics by Futurist Gerd Leonhard

Gerd Leonhard, Futurist and Humanist, Author, Keynote Speaker, CEO The Futures Agency, Zurich / Switzerland
Gerd Leonhard is a hunter and gatherer of human values from the future. From culture and society to commerce and technology, Gerd brings back the news from the future so business and society leaders can make better choices right now. In his latest book, Technology vs Humanity, Gerd explores the key ethical and social questions which urgently require an answer before we increasingly abdicate our very humanity. For organizations in the grip of disruption, Gerd supplies visionary insights and concentrated wisdom that inform key decision makers today. A musician by origin, Gerd Leonhard has now redefined the vocation of futurist as a new humanist.
Gerd was listed as one of the top 100 influencers in technology by Wired magazine (2015).