The speech will be around 20 minutes and then there will be a possibility to ask some questions. — Thank you so much for coming to hear me talk about the trolley problem. I'm

00:35

my name is Maya, I'm a PhD candidate at Leuphana University, where I have just started a PhD on this topic. I am also the Director of Applied Research at Tactical

00:48

Tech here in Berlin. But when I speak today I'm not speaking as an engineer or a technologist; I'm speaking as a cultural studies scholar, from a science and technology studies perspective. So it's interesting to be part of this stage on mobility, considering some of the talks that have already happened here today. I'm going to talk about the trolley problem, but before trolleys I want to start somewhere else: I want to start with a dog in Wales. A couple of months ago I came across a story, funny in itself, about how a dog had somehow found its way onto a motorway in the north of Wales. It was running around, very distressed by the high-speed traffic, and the truckers and motorists were also very distressed by this dog on the highway.

01:44

The police got involved, and they said: kill the dog if you have to. If you have to run over the dog, do it, because it's putting motorists at risk; you have trucks with sixteen wheels and so on. So the dog was run over. But the next day Sky News picked up the story, and there was all of this social media outrage about killing the dog. You can see it quite well on this slide, the kinds of comments. People were very upset that the dog was killed. Questions about who its owner was, the fact that it belonged to somebody, the feelings of the dog, the responsibility of the police: all of these things came into the picture. I was reading this story and it was

02:36

sort of exactly around some of the problems we have to confront when we think about what it means to engage with the trolley problem. So what is the trolley problem? I'm going to talk about it as the philosopher Judith Jarvis Thomson has spoken about it; many people have written about the trolley problem. It's a long description, but I've put it up here and I'll read it. I quote from a 1985 article: "Some years ago, Philippa Foot drew attention to an extraordinarily interesting

03:07

problem. Suppose you are the driver of a trolley. The trolley rounds a bend, and there come into view ahead five track workmen, who have been repairing the track. The track goes through a bit of a valley at that point, and the sides are steep, so you must stop the trolley if you wish to avoid running the five men down. You step on the brakes, but alas they don't work. Now you suddenly see a spur of track leading off to the right. You can turn the trolley onto it, and thus save the five men on the straight track ahead. Unfortunately, Mrs. Foot has arranged that there is one track workman on that spur of track. He can no more get off the track in time than the five can, so you will kill him if you turn the trolley onto him. Is it morally permissible for you to turn the trolley?" And just so that I don't have to spend all this time talking, I also have a little video by a scholar, Patrick Lin, who has done a lot of work on this. It's just a little clip from the video,

04:04

which applies this trolley problem to the context of driverless cars, because the trolley problem is what's being used to train machine learning software in driverless cars. [Video clip plays:] You're boxed in on all sides by other cars when suddenly a large, heavy object falls

04:25

off the truck in front of you. Your car can't stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve into the motorcycle, or swerve into the SUV. Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving, even if it means hitting the large object and sacrificing your life, or take the middle ground by hitting the SUV, which has a high passenger safety rating? So what should the self-driving car do? But now there's a motor-

05:00

cyclist wearing a helmet on one side and another one without a helmet on the other. Which one should

05:05

your robot car crash into? If you say the one with the helmet because she is more likely to survive, aren't you penalizing the responsible motorist? If instead you say the one without the helmet because he is acting irresponsibly, you are going well beyond the initial design principle of minimizing harm. [End of clip.] OK, so there's a much longer video on YouTube which you can take a look at,

05:30

which tells you about all of the different scenarios that a driverless car may have to face. Judith Thomson goes on, in this very long and interesting paper, to walk through

05:44

the trolley problem in different ways. What the trolley problem basically asks you to do is to engage with something quite complicated: is it OK to sacrifice one person to save five? In another version of the problem there is a bystander at the switch that changes the tracks; you are not the trolley driver, you're just a bystander, and

06:06

how should you make a decision? Are you responsible if you're just standing there and you have the opportunity to change the course of things? What is your responsibility? The transplant case takes it into a different realm, where a doctor has the opportunity to save the lives of five people by sacrificing one young, healthy person. These are really impossible thought experiments that Judith Thomson and other philosophers have set up, but they bring a very interesting contextual factor into thinking about these things, because they don't happen in isolation or at random. There is no generic track workman on a track: these are all particular tracks and workmen and people in very specific contexts. What if you knew personally the one person who was tied down on one side, but didn't know any of the five people on the other side? The fat man variant gives you the opportunity to push somebody very large off a bridge, who may stop the trolley; the loop variant extends the space of the track. All of these situations force people to think about: is the outcome what matters, or is how you address the problem what matters? Is letting die the same as killing? So this is the test that's being applied, as I said, in the development of driverless cars, by Google, BMW, Tesla, all of these companies. What's my problem with it? There are a couple of problems. The first is that we're talking about a situation where human beings in the near future are going to be in vehicles that they share spaces with; I think there's even Tesla ad copy about this, a shared

08:02

space of fluidity and shared control between humans and machines. And actually, we're already there. Every time you get on a plane, you're already giving over control to a machine. Every time people go up in

08:14

rockets, a lot of this is done by the machine itself. So why wouldn't we trust machines to do these things? Of course, in the case of planes and rockets you have highly skilled and trained people, not people like you and me, who are possibly not very good drivers (I mean, I am an excellent driver) and who will not know how to solve all of these complex ethical issues. But there's a reason, there's a

08:39

history to how we trust machines, and to how we think that machines don't make mistakes but human beings do. That's one of the biggest rationales for why we even need self-driving cars: human beings make mistakes, but driverless cars would be accurate because they can be programmed to be so. However, Madeleine Elish, a scholar in the United States, has done a lot of work on the history of dealing with error between machines and humans, and finds that in cases of accidents there is a tendency to blame humans and praise machines. The machine never really makes a mistake; it's the human who makes the mistake, and the human who has to take responsibility. This would not be a problem, except that we're not dealing with just a regular car. We're dealing with machines that we think are programmed in a certain way to achieve certain functions, and there is accountability which has to somehow be stretched between the human and the machine. She talks about the concept of "moral crumple zones", like the crumple zone in a car accident, and she says that

09:54

the human being tends to get caught in this zone, because we just believe that the machine is all-powerful. The thing that bothers me about this is that somehow we have taken this concept of ethics and made it something that the law, in this case American law in particular, can deal with. I'm not so comfortable with the fact that ethics becomes a question of American tort law and negligence law, and of insurance companies having to know how to deal with situations of error and accidents in self-driving cars. Additionally, ethics has become a problem of engineering, and I'm not sure

10:35

that we're actually at the point where we can say we have the answers. Because if you look at the trolley problem, I'm curious why they picked this and not anything else; that's partly what my work now is, to try and get at why this problem and not something else. Was it just something that could be programmed more easily? I found this quote, which I really like, which speaks to the law being able to deal with human error and machine error. There are statistics going around about the accidents that Google's self-driving car has been in, and the problem has been the human beings around it, in other cars. The driverless cars have been driving around Mountain View and have been rear-ended a number of times, but that's because

11:28

the human beings in other cars have not expected the driverless car to be so precise. We all know that when we drive, we take risks, calculated risks. There are margins of error, and we keep stretching that margin depending on the situation and the context, depending on who's in the car with us or whether we're alone. But the driverless car is programmed to be really precise. So there is a whole lot of grey zone right now, and uncertainty about how you actually regulate and monitor what error means in humans and machines. And the third thing, I

12:11

mean, there's this cluster of issues around error and how we deal with machine and human error, which I don't think we've worked out. But there's a point slightly beyond that which interests me as well, which is: how does that lead us to a place of accountability? I think accountability is a much bigger question than ethics as it's been framed here. There's been a lot of emphasis on machine learning algorithms, and algorithms generally; many of us in this sort of space are quite familiar with work around algorithms and discussions of algorithms. But I think there's a

12:43

way in which the discussion of algorithms tends to narrow things down and say: well, that's where the conversation is, the problem is the algorithm and not anything else. But I think that

12:54

actually, if we move outside of just this car trundling down this hypothetical road, having to deal with these impossible situations of fat men and bystanders, and go beyond and behind the algorithm to understand where these things came from, then we're already in a very different sort of space; we're not talking about cars and trolleys anymore. So I look to the work of people like Langdon Winner, who wrote, more than thirty years ago, about artifacts having politics. He has a really interesting body of work that looks at how architects and planners built a lot of the infrastructure of the City of New York, and how their values came into the building of that architecture and technology. The overpasses that connected New York to its surroundings were built at a certain height to prevent buses from being able to go to more affluent areas like Long Island. At the time New York was being built, only certain kinds of people travelled by bus: they tended to be poor and they tended to be black. So that was a kind of regulation, of values, built into the design of

14:09

an architecture of the city. I think a similar kind of thing is happening in the context of self-driving cars: we have to look behind the algorithms a little bit, at the networks of humans, of institutions, of motivations, of money, and even of signs and symbolism, to understand why we are even going down a certain path. If we think about the case of Volkswagen: there was a defeat device in those cars, but there were people who actually programmed that defeat device to work in a certain way. The work of Bruno Latour, the French philosopher, is also really interesting; he tries to do the same thing, to think about power in a way that's not just about humans, but recognizes that we are embedded in a world along with so many different artifacts and objects and surfaces and interfaces that have histories and antecedents, and to ask what their role is and where they come from. There's a very interesting section in his book where he talks about a speed bump and a sign approaching a school. The speed bump, he says, is just an inanimate object that's been built into the road to slow the car down. But the sign approaching the school says "school ahead, 30 miles per hour"; there's nothing physically making your car slow down, yet there's a whole set of reasons behind why that sign influences you. You could adjust the suspension of your car and just go really fast over the speed bump, and you could take a moderate risk by ignoring the sign and not slowing down near the school. Those are just two examples, but he immediately brings in all other kinds of objects and says these objects have their histories as well. So I think it's about unpacking how these algorithms in self-driving cars are developed: what are the contexts, who are these engineers, what are these companies that are actually taking the concept of ethics and saying: oh, we have the answer, so we're going to tell you how to do ethics, and this is how the law is going to regulate it, and this machine is going to figure it out for us.

16:29

I think I'm going to end by also calling into question the problem of ethics itself: why has ethics become this default, and what kind of ethics? The kinds of ethical problems that the trolley problem is about come from a Kantian tradition, from consequentialist and virtue ethics, from a very patriarchal, Judeo-Christian framework. And we have ethics that come from many other, different places; feminist scholars have really pointed this out and said that there are perhaps ethics that come from collectives, ethics that come from networks and from other sorts of notions. Should we be looking to those? And so

17:12

maybe ethics is the wrong framework. I think about the dog being run over in Wales, and I fear for the people of Wales, because frankly I don't think they're ready for driverless cars or the robot apocalypse. But I hope to be able to work this out in the next two years. I'm going to end with something that may be quite familiar to those of you who have been to other parts of the world and encountered driving there; it's a kind of funny video, I suppose.

17:55

This could be pretty much anywhere in the world, and I think there's a very different kind of logic at work there, a different kind of system. Driverless cars are not equipped to function in that environment, but I think it is actually the reality of where

18:13

the majority of the world is, and of the kinds of logics and styles that have worked themselves out. I don't know whether we'll get to the point where we can actually figure out and understand how that system works

18:28

out, but let's see, let's see if we do; I'm waiting to find out. So thank you very much, and I'm really happy to open this up to questions. [Moderator:] Thank you very much for this interesting speech. We have around ten minutes for questions from the audience, so please start; there's the first question. [Audience:] This was a very interesting discussion; I think it touches on philosophy at a deep level. I'm wondering: at what point does this become an issue when most cars are driverless, in which case they can interact with each other and make these certain types of choices based on a network of sorts?

19:37

[Speaker:] I can only speculate; I don't think I know enough about it, though I would like to know more. What you've heard from me is a combination of what journalists at Wired or The Guardian have written and some very technical papers that are coming out. This is a field that's still being developed, and people who work in these environments can probably speak to it a lot more, but I think the scale is only going to increase, and it's going to become more and more complex. So yeah,

20:12

no, I can't answer that technically, but it terrifies me to think of how we don't know how to sort this out. [Moderator:] Are there any other questions?

20:26

Please. There are a lot of people here who are also part of the mobility track and possibly work in some of these industries; if you actually have an answer for the gentleman, or something more technical to contribute, please do. [Audience:] Hello, my name is Carolyn. Thank you for the interesting lecture. Maybe I'm suspicious, but what I understood you were hinting at is: if all cars are without drivers, then there's

21:03

no danger, because there are no humans in the cars. But there are still pedestrians; at least, there will be people outside. So how does it actually change things? Maybe you can explain more what you were asking in your question.

21:20

[First questioner:] What I guess I was more interested in is whether this ceases to be a problem on a highway, for example, where maybe there are no pedestrians; or take the particular case you showed, where the car had to make a choice between running into the SUV or the motorcycle. Say it's fifty years in the future and they're all driverless cars with people in them: in that case, can't all the cars, to the left, to the right and behind this car, make a decision together, in which case this ethics issue ceases to be an issue? [Speaker:] Thanks for that clarification. First, I don't know if fifty years in the future everything is going to

22:19

be automated with no humans, or whether we should do things now based on that assumption. The other point is that I think a lot of these systems, not at the consumer level but possibly at a military level, already do talk to each other in a way and share information, where you

22:41

sometimes don't have a "human in the loop", as they say in the development of these systems. It's possible that at some point you will have systems where there's no human in the loop, but I think there's always going to be human-in-the-loop design to a fair extent, because people are recognizing that this is an issue: you cannot completely take humans out. So, for example, Tesla definitely has human-in-the-loop design, whereas Google recently had some issues with California law because they designed their cars to be completely driverless, and California law, as of a couple of months ago at least, says that you cannot have a completely driverless car; there has to be a human who can override the machine in some way. So I don't know if we're there yet, and yes, I'm uncertain about whether we can make that decision for the people who will be around fifty years in the future. [Next question:] That was a great

23:51

segue into my question, actually, which is: to what extent can humans override the decision-making of the machine? Do we have that feature now or not? You mentioned certain designs, Tesla's and Google's, have the human in the loop, but when and how can a human override machine decision-making? [Speaker:] I think there are some instances

24:18

where they can; unfortunately I don't know the details. I would love to know; if anyone from Tesla is here and wants to give me some time in your lab to talk to your scientists, I'd be really happy to. I don't know, but there may be people in the room who know more specifically where, how and in what instances. I'm trying to glean as much as I can from the research papers, but I'm not an engineer. [Next question:] Thanks. I'm

24:50

wondering about this question of the trolley problem: it seems to be really the most popular question to ask about self-driving cars, and since you were slightly hinting at that, what do you think, or maybe you have some ideas, about what other questions could be discussed about upcoming developments? What questions should we maybe discuss instead? Do you have a suggestion, just to freshen things up? [Speaker:] Thank you. Well, actually, toward the end of my talk I talked about this in terms of accountability, but

25:25

more in terms of the contexts and environments in which this stuff is being developed. It could be self-driving cars today; it will be something else tomorrow. There's all kinds of very useful but also utterly ludicrous technology being developed, and I'm personally more interested in questions around where this is coming from: who is making the decisions, where the money is coming from and where it is going. I think there is an entire culture in Silicon Valley, and maybe it's too easy to pinpoint it as one place, but there are so many streams of power and influence, some of which are invisible to us right now. For me that's the interesting question. The problem with the trolley framing is that it narrows it all down to a car driving on a road in a certain way and whether it will have an accident or not. There's also this development of cars towards more efficient driving, electric cars; I know there's a lot of twinning of those things, and maybe that's the more useful technology. Our collective dependence on oil is generally problematic; it's not a car problem specifically. Maybe getting human beings out of cars, our reliance on cars, are the more important issues. But as I said, I'm more interested in where the algorithms come from and who makes the decisions.

26:57

[Next question:] My question is: did you get to talk to developers, for example, about how they work with the background of the philosophers, so that you know more about how these decisions are made? [Speaker:] I want to do some ethnography with high-tech production cultures, and I think that's definitely

27:18

where I want to go; I think there are a lot of unanswered questions there. So after I do that, maybe in two years' time, I'll come back and talk about it then, but I don't know at this point. [Moderator:] The next question, from

27:35

over there. [Audience:] There are more serious issues in this area; I mean, take for example drones in war

27:46

conflicts. For me those are much more severe and ethically concerning. From my point of view, I don't see much of a problem with self-driving cars; I mean, come on, how often do you see this trolley problem on the roads and highways? But for me the risk area is when we make autonomous, emotionless drones that just fly to their targets; for me this is like insanity, and I think we should be confronting them with these kinds of ethical problems. [Speaker:] Yeah, I agree. I think that's very

28:27

problematic, and there is a sort of continuum between the drones in the sky and the cars on the ground. I think there's another panel happening right now around exactly this; I was just talking to some friends who are on that panel, and one of the things we talked about, which I think I forgot to mention in my talk, is that it's not just a question of law or engineering being able to figure this out, but also: what does licensing or accreditation look like? I think that's another interesting question. There is a

28:55

kind of inevitability to increasingly shared spaces of human-machine interaction and different levels of autonomy, but we perhaps have to think about not just

29:07

regulation in terms of how insurance payouts will happen, but what it means to actually engage in that space, and to accredit and license the use of things like drones and driverless cars. There can be a lot of benefits to these things, for older people, for people with reduced mobility; a lot of people have great rationales for driverless cars. But there's too much of a focus on these sorts of problems, which actually do happen when we drive cars ourselves: we constantly have to make split-second decisions. So I think there is still more to be said about working through these problems. I think the same goes for things like drones; I don't think they're actually separate. There's a lot of sharing of technology, of discourse; they're actually quite connected. Maybe some people in this room saw the recent report this

30:07

week, some news about how a violent extremist group is actually figuring out driverless car technology, to use a driverless car as a weapon. [Speaker:] Yes, I saw that, the link that you sent me. But there's also a sort of tipping point: a terrorist is a terrorist and will use pretty much anything they want to be violent with, so you can't vilify the driverless car either; those actors will pick up

30:36

anything. So I can't really respond to that, but I think the question of drones, driverless cars and autonomy is the bigger issue. [Moderator:] OK, thank you very much, thank you both.

30:52

And thanks to the audience too. You now have a small break, and then we continue at 4:50 with three speakers from HRS Innovation. I hope to see you again.

Content metadata

This talk will present an overview of how the commercial development of self-driving cars is significantly shaping conceptions of ethics in data societies, and what this means for an understanding of human and machine interactions, intelligence and autonomy.