"There will always be a place for humans"

"Technology is not neutral at all, there are always political and social meanings attached to it", says philosopher Mark Coeckelbergh. (Photo: University of Vienna)

In line with the current Semester Question "How are we living in the digital future?", the philosopher Mark Coeckelbergh addresses in his research the causes and effects of digitisation entering our lives. In this interview he offers his take on the growing alienation of humans from their environment.

uni:view: Your research focuses on the philosophy of technology, the understanding and evaluation of new developments in robotics and artificial intelligence. What are the current developments?

Mark Coeckelbergh: There are various developments, for example robots in healthcare and self-driving cars. In the case of smart technologies, algorithms are selecting things for us when we are using the Internet and social media. I think that the developments in automation are going to affect our lives more, and sooner, than those in artificial intelligence. It's not going as fast as people sometimes think. But smart technologies are already directly impacting our lives and will bring the biggest changes for our future. In that sense, the machine is already here.

uni:view: Smart technologies are invented to make our lives easier and more comfortable. At the same time people seem to lose certain abilities – like reading maps, because the navigator tells them where to go. Is this growing dependence on technology a problem?

Coeckelbergh: Yes, we lose the skills to do certain things. But the loss of skills doesn't need to be a problem. For example, I am not able to ride a horse or drive a horse-drawn cart, and nowadays that is not a problem. What is a problem is that, if we have more and more machines between us and our environment, we relate less to our surroundings, to other people and to the natural world. This could be dangerous, for example in the case of self-driving cars: A pedestrian or a cyclist cannot really relate to the driver anymore as another social being, because the driver is an algorithm. And coming back to the navigator: We watch it and we are so busy with getting from A to B – because that is where the machine is directing us – that we might not see things next to the road. The more devices and distance we have between us and the environment, the less we feel a direct relation and the less engaged we are. So we might not notice a vulnerable human being.

uni:view: How can we avoid this kind of alienation from the environment?

Coeckelbergh: I don't think we can or want to go back to the old times, but maybe we can have smart technologies that enable us to do things in an engaged way. The solution is not to ban certain technologies but rather to change and regulate them in such a way that they contribute to improving human lives and our relations with other people. I think this can be done. What we need, instead of people complaining about technologies, are citizens and intellectuals who proactively contribute to a more ethical development of technologies.

Of course, it is important to see the risks that might be there. It is important to think about ethical problems. Personally, I would like to stress the social and political side as well. Up to now, industrial technology and automation have caused unemployment in factories. What will happen next if we automate more things in services, even in professions like journalism or law? Will these people lose their jobs? Will new jobs be created? Or will we find new ways for humans to collaborate with machines?

uni:view: Loss of jobs… That sounds like "the machines are taking over" …

Coeckelbergh: I don't think they should, but I also think they can't. Because humans have ethical judgement and we use our emotions. We take situations into account and then we adapt and find creative solutions. I think a machine can never do that, and it is dangerous if some technology developers give us the impression that the machine can do everything. In my opinion, there will always be a place for humans.

uni:view: But not today's place?

Coeckelbergh: We shouldn't always think that everything is about replacement. What often happens is that humans and machines work together. I use my computer and my phone to do certain things, you could say we work together. And it's not that the robots are coming in the sense that they take over everything. For me the question is more about how we are going to do things together. This is the challenge: that we learn how to work together with technology to improve the quality of human lives, of human relations and in general to create a better and more just society. I find it philosophically interesting, because these issues raise questions like: What is good healthcare? What is good education? What is the human quality in all of it?

The European project DREAM, funded by the European Union, develops robots to be used in therapy for children with autism spectrum disorder. The robot is not remote-controlled but autonomous: It can interact with the child and assess the child's behaviour, while the therapist supervises what happens. It is a technical project, but the ethics research, led by Mark Coeckelbergh, is part of the project in order to ensure that the development of the robots and the therapy are done in an ethical way.

uni:view: Healthcare in Japan is already quite different than it is here. A lot of robots are used in daily healthcare. Are the same discussions going on there concerning alienation from human relations?

Coeckelbergh: Japanese people have fewer problems with accepting these machines. Because in their tradition – I am thinking mainly of Shintoism here – it is not such a problem that objects play an important role. In the West we have this idea that subjects are radically different from objects. As a Western person I partly agree with that, but it is interesting to explore different world views. Another reason for the large number of "working robots" in general is that the Japanese are said not to want immigrant workers to come to work in their country. Therefore, they put robots in place. If this is true, then to me it shows again that a technology issue is always directly related to a social issue. Technology is not neutral at all, there are always political and social meanings attached to it.

Q&A with Mark Coeckelbergh: On 10 October and 18 October, Mark Coeckelbergh will explain on "derStandard.at/semesterfrage" why we shouldn't fear artificial intelligence but should be very aware of the slow changes automation causes, like alienation from the environment. In a commentary and a Q&A article he will answer questions from the community. Take part in the discussion!

uni:view: Coming to artificial intelligence. Emotions and empathy are very human qualities. Do you think that there will be an algorithm for them someday?

Coeckelbergh: If you look at how empathy and emotions work in humans, it has a lot to do with the fact that we are embodied. This body makes us vulnerable, and we experience the possibility of harm, the risk. Artificial intelligence that is running in a computer lacks embodiment and the vulnerability that comes with it. For a machine, in a way, nothing is at stake, it cannot lose, it cannot be hurt, and that's why I think it is really dangerous to let it make a lot of decisions, because it can harm human beings.

uni:view: The philosopher Nick Bostrom from the University of Oxford supports the thesis that artificial intelligence will get smarter than humankind and destroy humankind in the end. What do you think about his theory?

Coeckelbergh: What is true is that machines have already overtaken humans at playing chess or Go. That is something that we have to accept. But I think machines of the future will still lack various other forms of intelligence, like improvisation and emotions. Nick Bostrom's idea is that of a so-called singularity, meaning that he believes in a single turning point sometime in the future when suddenly everything changes. In my opinion, technological development is not that spectacular; there won't be a quasi-religious turning point when everything turns upside down and the machines take over. For me, the really dangerous things are the invisible, slow changes that leave us very different after a while without our noticing. I fear that we sort of blindly walk into the future without noticing the changes in technology, like algorithms getting smarter and smarter, but also the changes in ourselves, in humans. I think it will be a gradual change, but a change for sure.

uni:view: Thank you for the interview! (td)

About Mark Coeckelbergh: Since December 2015, Mark Coeckelbergh has been Professor of Philosophy at the Department of Philosophy of the University of Vienna. His research focuses on the philosophy of technology and media, in particular on understanding and evaluating new developments in robotics, artificial intelligence and (other) information and communication technologies. Born in 1975 in Leuven, Belgium, he studied Philosophy and Political Sciences. After having started his career in his hometown, he further pursued his academic career in the Netherlands and the UK. Besides his regular teaching and research activities, Coeckelbergh is engaged in a number of academic efforts to ensure the ethical development of robots, such as the EU-funded DREAM project.