Classic Boston Dynamics


No doubt, any internet dweller has come across a Boston Dynamics viral video at this point. The Massachusetts-based robotics firm shows off its many robots doing many things — most notably, falling down or getting duped into doing something clumsy. Of course, things don't always go according to plan.

At a demo for the Congress of Future Science and Technology Leaders conference, Boston Dynamics debuted Atlas. Billed as the world's "most dynamic humanoid," Atlas performed a series of tricks and tasks. He even endured taunts from Spot Mini, another robot in the demo. Everything was fine . . . until he encountered a projector located on the floor.

Atlas took an unfortunate fall after finishing the demo. To complete the very human robotic experience, Spot Mini jaunted offstage in one final taunt. Luckily, Atlas emerged relatively unscathed and, so far as we know, he doesn’t have an ego to bruise. But this raises the question: should we feel bad for Atlas?

Fears of Skynet, I, Robot, & More

Plenty of people wince while watching Boston Dynamics videos in which engineers deliberately trip the robots or shove them off balance.

Of course, these are agility, stability, or dexterity tests.

Despite this knowledge, some in the tech community harbor fears that robot mistreatment could have calamitous effects. Even tech giants like Elon Musk have expressed deep fears about AI taking over.

"Robot bullying" is central to the argument for empathy toward our circuit-board brethren. On the other side of things, you have satirical sites such as Stop Robot Abuse, which showcases Boston Dynamics robots enduring "bullying" — and then calls on viewers to "stop actual animal abuse."


That's definitely a worthy cause, and the punchline suggests the site doesn't actually view the robots as being abused.

It also sharpens the question: should bullying something that isn't technically alive stir waves of empathy in humans?

Think about a movie like Marley & Me or The Iron Giant. You establish an emotional connection to a non-human character through the humans around it, and gradually come to understand that character on its own terms.

Would better understanding robots like Atlas mitigate the possibility of a robot uprising? Do they even have a perspective to understand? AIs do, but robots are another story.

Little Robots, Big Questions

The latest video from Boston Dynamics doesn't disappoint either. This poor robot just can't quite get the hang of stacking. But does the robot need reassurance? Scolding? Maybe some kind of robot cookie? Perhaps robots will become like Bender from Futurama or the Tachikomas in Ghost in the Shell and crave certain kinds of oil. Either way, Boston Dynamics is likely to continue these tests regardless of how we answer the empathy question.

Should we feel bad for Atlas and other robots used in tests and demonstrations? How can we use those emotional responses in robotics and AI development?