It appears this robot was actually fully autonomous, if still very primitive. The fact that it didn’t detect the child, even after running the child over, and would have proceeded to run over the child with another wheel, is worrisome.

This whole scenario reminds me of the traffic light sensors installed in many states to ensure that lights change when vehicles are waiting. Except that they don’t work for all vehicles. I ride a motorcycle, and I know for a fact that there have been lights I’ve sat at until a car pulled up behind me, and others where I’ve had to make a right turn, a U-turn, and then another right turn just to continue through the intersection.

We are on weak ground at the moment with automation, AI, and, it seems, automated traffic signals.

Testing is crucial for any piece of software. If you automate a file transformation, you test it.
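To make that concrete, here is a minimal sketch of the principle: an automated transformation paired with a test that exercises both the common case and a couple of edge cases. The function and its behavior are invented purely for illustration.

```python
# Hypothetical sketch: a trivial automated "file transformation"
# (uppercasing text) and the test that should ship alongside it.

def to_uppercase(text: str) -> str:
    """The transformation being automated: uppercase the input."""
    return text.upper()

def test_to_uppercase():
    assert to_uppercase("hello") == "HELLO"   # the common case
    assert to_uppercase("") == ""             # edge case: empty input
    assert to_uppercase("héllo") == "HÉLLO"   # edge case: non-ASCII

test_to_uppercase()
```

The point is not the transformation itself but the habit: any automated step, however trivial, gets tests that include inputs beyond the happy path.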

However, there are no standards for what an autonomous robot should be able to do. Companies are testing these robots, closing one eye, spitting on them, and saying, “Yeah, it’s ready.” Who knows what they have tested properly?

The toddler getting run over by an autonomous robot and the traffic light not working for motorcycles are the same problem. The companies tested for the common case: adults in the case of the robot, cars and trucks in the case of the traffic light. The exceptional but still normal cases, smaller humans and smaller vehicles such as motorcycles, they did not test for.
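The failure mode described above can be sketched as a detection threshold tuned only against the common case. Everything here, the signal values, the threshold, the function name, is invented for illustration; it is not how any real sensor is implemented.

```python
# Hypothetical sketch: a presence sensor whose threshold was calibrated
# only on the common case (cars), so smaller signatures fall through.

CAR_SIGNAL = 100         # arbitrary units: a car's metal signature
MOTORCYCLE_SIGNAL = 20   # a motorcycle's much smaller signature

DETECTION_THRESHOLD = 50  # tuned only against cars during testing

def vehicle_detected(signal: int) -> bool:
    """Return True if the sensor considers a vehicle present."""
    return signal >= DETECTION_THRESHOLD

# The common case passes every test the company ran...
assert vehicle_detected(CAR_SIGNAL)

# ...while the exceptional-but-normal case silently fails in the field.
assert not vehicle_detected(MOTORCYCLE_SIGNAL)
```

A test suite that only ever feeds in `CAR_SIGNAL` will never catch this; the bug only appears when the exceptional cases are part of the test plan.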

Not only that, but it is unclear who is responsible if, say, this robot killed a toddler. Or if the traffic light got a motorcyclist killed, for that matter.

Is the coder responsible? The company that made the device? The company that owns and operates the device? The device itself? A combination of all of them (which seems to be the lawyers’ tactic these days)?

The lack of government standards or laws governing the creation of automation technology is a huge problem, just as it will be a huge problem when AI or the codops (Computerized Doppelgangers) come into existence.

The time to write intelligent, coherent, and effective legislation that would have prevented fatalities in these cases of simple automation has already passed. Now we need legislation to prevent future injuries and fatalities.

The major problem is: who can create such legislation? Certainly not the sitting Congress, which is in many cases barely literate, corporate-controlled, and unable to create functional laws unless motivated by the protection of its own interests.