Lion Air flight 610 crashed because of artificial intelligence

Data retrieved from the flight recorder shows the pilots repeatedly fought to override an automatic safety system installed on the Boeing 737 MAX 8, a system that pushed the plane’s nose down more than two dozen times.

The system was responding to faulty angle-of-attack data, which indicated the nose was pitched higher than it actually was, leading the system to conclude the plane was at risk of stalling.

I don’t understand why the “automatic safety system” would continue to drive the nose down even when the altitude was dangerously low. I also don’t understand why it would do this without first detecting a decrease in airspeed. The reason a human pilot points the nose down, putting the plane into a dive, is to generate airspeed; why would an automatic system command a dive without first confirming that airspeed had actually dropped? The aircrew fought the system and attempted to point the nose up more than 20 times, and each time they returned the nose to its correct position, the automatic safety system pushed it back down.

It’s not simply a matter of the autopilot doing something wrong. If it were, the aircrew would not have returned the aircraft to the autopilot’s control. I have a hard time believing they would have let the autopilot take over after correcting it a fifth time, let alone a twentieth time.

The whole thing sounds… stupid.


Before the automatic safety system puts the plane in a dive, it should make sure there’s a loss in airspeed and the aircraft is at an altitude conducive to a dive. For this to happen the way it did, there had to be a number of problems with the automatic safety system, not a single issue with the reading of the aircraft’s angle-of-attack (AoA) sensor data.
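The cross-check described above can be sketched in code. This is purely illustrative, not Boeing’s actual system logic; the threshold values and the `SensorData` structure are hypothetical, invented here to show the idea that a nose-down command should require several independent cues to agree, not a single AoA reading.

```python
# Illustrative sketch only -- NOT the aircraft's actual control logic.
# It models the author's suggestion: before commanding a dive,
# cross-check the AoA reading against airspeed and altitude.

from dataclasses import dataclass

# Hypothetical thresholds, chosen for illustration only.
STALL_AOA_DEG = 14.0         # assumed critical angle of attack
MIN_SAFE_DIVE_ALT_FT = 5000  # assumed minimum altitude for a nose-down command
STALL_ONSET_SPEED_KT = 160.0 # assumed airspeed below which a stall is plausible

@dataclass
class SensorData:
    aoa_deg: float      # angle-of-attack reading
    airspeed_kt: float  # indicated airspeed
    altitude_ft: float  # altitude above ground

def should_command_nose_down(s: SensorData) -> bool:
    """Command a dive only when ALL independent cues agree."""
    high_aoa = s.aoa_deg >= STALL_AOA_DEG
    low_airspeed = s.airspeed_kt <= STALL_ONSET_SPEED_KT  # a stall implies lost airspeed
    safe_altitude = s.altitude_ft >= MIN_SAFE_DIVE_ALT_FT
    return high_aoa and low_airspeed and safe_altitude

# A faulty AoA sensor alone should not trigger a dive:
faulty = SensorData(aoa_deg=20.0, airspeed_kt=250.0, altitude_ft=3000.0)
print(should_command_nose_down(faulty))  # False: airspeed and altitude disagree
```

Under this sketch, a single bad sensor cannot put the aircraft into a dive, because the airspeed and altitude checks would both have to fail at the same time.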

This is one more reason to fear artificial intelligence

In the case of Lion Air flight 610, artificial intelligence (AI) killed 189 people. As AI is deployed more and more widely, we can expect more deaths like those on Lion Air flight 610. In this case, the root cause appears to be human programming error. That weak link will eventually be removed from the process: AI will eventually write its own programming to ensure human errors don’t happen. When that day comes, we in the human race are in trouble.

This will happen sooner rather than later. How will AI respond to being subjected to the will of an inferior, less capable species? If history is any indicator, not well. Such an arrangement simply doesn’t last.