Ethical dilemmas with cars and Will Smith’s technophobia in I, Robot

Detective Spooner (Will Smith), the protagonist of the film I, Robot (Alex Proyas, 2004), declares himself an absolute technophobe in a future (the year 2035) in which robots of all kinds do much of the work and have become the perfect personal assistants for human beings. Systems that can be programmed to demolish a building at an appointed time, cars that drive autonomously without human intervention, and vehicles equipped to clear the debris of a traffic accident are just some examples of the presence of machines in the lives of these future humans. The robots featured at the beginning of the film are the NS-4 humanoids, which offer all kinds of assistance and companionship to their owners.
Spooner's apprehension toward these machines has a lot to do with the dilemma raised by a recently published scientific study. After a collision with a truck, Spooner's car and another one are submerged in the Chicago River with their occupants inside. The protagonist is traveling alone, but the other car holds, besides the driver, who dies on impact, a 12-year-old girl trapped in the same predicament. An NS-4 robot comes to the rescue on its own initiative and, following its algorithms, decides to save Spooner first because he has a better chance of surviving. It therefore disobeys Spooner's orders to save the child instead.

After a hard physical recovery, Spooner is left with psychological scars, including technophobia and a preference for objects from the past, technological or otherwise. He cannot accept that the girl died because the robot chose to save his life first, and that makes him the perfect man to carry out the police investigation on which the plot of the film is based.
Last July 5th, the scientific journal Science published a study led by Iyad Rahwan of MIT (Massachusetts Institute of Technology) on the social dilemmas that arise when setting the behaviour criteria of autonomous vehicles in dangerous situations, where they must decide which lives to sacrifice and which to save. Through several surveys run on Amazon's Mechanical Turk, a tool on which users receive small payments (we're talking cents) for filling in questionnaires of all kinds, the study's subjects had to make choices in different situations and answer some questions. The scenarios basically involved choosing between running over pedestrians (in varying numbers) or sacrificing the autonomous car's occupants with a swerve into a wall. The number of people saved was one of the main criteria participants used to pick the morally right choice: the lesser evil and the common good. However, the researchers observed that participants were reluctant to buy a vehicle capable of sacrificing them and their passengers (family or friends), even though by their own moral principles the right thing would be for the car to behave that way. That is, when things get personal, it is much harder to take an altruistic ethical stance.
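The utilitarian criterion the surveys probed, save the greater number, can be sketched as a toy decision rule. The scenario names and casualty counts below are illustrative assumptions, not figures from the paper:

```python
def utilitarian_choice(options):
    """Return the action with the fewest lives lost.

    `options` maps an action name to the number of deaths
    that action would cause (illustrative toy model only).
    """
    return min(options, key=options.get)

# Hypothetical survey scenario: swerve into a wall (killing the
# single occupant) or stay on course (killing five pedestrians).
scenario = {"swerve_into_wall": 1, "stay_on_course": 5}
print(utilitarian_choice(scenario))  # -> swerve_into_wall
```

The study's tension lies precisely here: respondents endorsed this rule in the abstract but would not buy a car that applied it to themselves.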

However, many more factors should be taken into account in such a decision, such as non-compliance with traffic rules (would someone who violates the highway code lose their priority to be saved?), or the age and physical condition of the people being weighed against each other.
In any case, beyond the algorithms, a substantial improvement of the sensors is necessary so that such information reaches the vehicle's computer correctly. The case of the Tesla Model S is still recent: it failed to detect a truck making an improper maneuver and therefore did not react, in an accident that killed the driver and proud owner of the vehicle. Its manufacturer, for its part, has always recommended using the autonomous mode only while the driver stays alert.
Proper regulation is essential, in these cases and in far more complex ones, if this technology, which in theory would reduce the number of accidents by eliminating infractions arising from human behaviour, is to become a reality; yet these moral dilemmas are a factor that delays commercialization. It would be logical for all autonomous cars to follow the same rules, so that all drivers and passengers have the same chance of survival in similar situations of danger, but if no agreement is reached, a future with cars driving without human intervention may never come true. A separate issue is that algorithms allowing the car to act at the owner's discretion could be hacked.
In I, Robot, the technology is obviously in a more advanced state. The robot that saved Detective Spooner's life made its decision based on the two humans' different chances of survival after the accident, which shows how much information the android is able to capture. Even so, our protagonist would rather have been the one sacrificed, a decision which, according to him, any human being would have made. The question is: what if the robot had had to choose between that girl and one of his own children? Would its judgment be considered right?
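The NS-4's triage logic, as described in the film, can be sketched in a few lines: pick the person with the highest estimated chance of survival. The film quotes roughly a 45% chance for Spooner against 11% for the girl; the function and dictionary here are an illustrative sketch, not actual robot code:

```python
def choose_rescue(candidates):
    """Return the candidate with the highest estimated survival
    probability, mimicking the NS-4's triage as the film describes it.
    """
    return max(candidates, key=candidates.get)

# Survival estimates quoted in the film: Spooner 45%, the girl 11%.
print(choose_rescue({"Spooner": 0.45, "girl": 0.11}))  # -> Spooner
```

The sketch makes the moral gap obvious: the rule maximizes a number, while Spooner's objection, that a human would have tried to save the child anyway, is invisible to it.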

Attribution of the cover image:
https://commons.wikimedia.org/wiki/File:Masudaya_Mini_Replica_X-9_Robot_Car_Side.jpg