
According to Lance Gharavi, an associate professor of theater at Arizona State University, the question of free will rapidly resolves into a problem of desire. Steering the conversation into philosophical terrain, he observed that we can’t even say definitively whether humans have free will. But, he continued, if a robot has desires, even if those desires just involve the need to appropriately serve its master, then it can suffer. And if it can suffer, we have an ethical responsibility toward it. For Hartzog, on the other hand, the ethical stakes of human-like robots have more to do with the ways that we relate to humans.

In a Twitter-inspired story, CNN asks, “is it cruel to kick a robot dog?”

In what has become a regular demo by Boston Dynamics, engineers kicked Spot the robot dog in order to push it off balance and watch it stabilize itself. This time (and probably every time as well) a number of people on Twitter said, “that poor robot!”, and so CNN rounded up the tweets and asked retired AI and robotics professor Noel Sharkey about the ethics of kicking robots. He said, “The only way it’s unethical is if the robot could feel pain,” and followed that up with the warning that because humans anthropomorphize things, abusing robots may make us more likely to abuse things that actually can experience pain, because we’ve gotten used to it. He drew a comparison to the philosophers who argued animals were “clockwork” (I’m not familiar with anyone using that exact term, but the claim that animals are lesser than humans because they are soulless has certainly been around for thousands of years), but who nonetheless argued that animals should not be abused because it debased the abuser.

His comment that it wasn’t unethical because the robot did not feel pain got me thinking. Yes, it’s certainly true that the robot does not have any sensors to register damage from these kicks, and it does nothing other than regain its balance, but the idea that “it’s cool, it doesn’t feel pain” strikes me as just a variation of the old thinking-machine conundrum. We say computers don’t think because we completely understand the rules that govern their behavior. We say robots don’t feel pain because they’re not alive. But it seems to me that with pain we have a simpler Chinese Room. I know I’ve described integrated computer programs that were suffering from some sort of system fault and logging errors continuously as “being in pain.” No, the programs weren’t alive, but what is pain other than a signal indicating damage or negative reinforcement? Certainly error counters and exceptions do that. In a sense, that’s pain, or at least a reasonable functional facsimile thereof.
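
To make that concrete, here’s a toy sketch (entirely hypothetical, and certainly not anything running on Spot) of “pain” as software: a counter that accumulates fault events, decays over time, and triggers avoidance above a threshold, which is functionally what nociception does.

```python
# Toy sketch: "pain" as an error signal driving avoidance.
# All names are invented; this mirrors ordinary fault logging,
# not any real robot's control stack.

import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("spot-ish")

class PainSignal:
    """Accumulates damage/fault events and decays over time."""
    def __init__(self, decay=0.9, threshold=5.0):
        self.level = 0.0
        self.decay = decay          # pain fades if nothing new hurts
        self.threshold = threshold  # above this, change behavior

    def register_fault(self, severity):
        self.level += severity
        log.warning("fault registered, pain level now %.1f", self.level)

    def tick(self):
        """Decay the signal; report whether it still demands a response."""
        self.level *= self.decay
        return self.level > self.threshold

pain = PainSignal()
for impact in [0.0, 3.0, 4.0, 0.0]:   # e.g. kicks of varying force
    if impact:
        pain.register_fault(impact)
    if pain.tick():
        print("avoidance behavior: back away from the kicker")
```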

So is it wrong to kick Spot? I’m thinking it’s not, but at the same time, if you do it too much, and for enjoyment, maybe it is. Perhaps that’s not very satisfying, but isn’t it often the case that the motivations of the actor are the determining factor in deciding whether something is moral or not?

This dress by Dutch designer and V2_ collaborator Anouk Wipprecht and Austrian hacker Daniel Schatzmayr (thingiverse, twitter) features a hexapod perched around the shoulders of the wearer, or perhaps it’s a dress with tripod epaulets. Normally the legs simply wave slowly, but when something triggers the proximity (sonar?) sensors, the legs suddenly pull in tight, as if the dress has become scared.
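
Purely as a guess at what that reflex looks like in software (the firmware isn’t described here, so the sensor, readings, and threshold below are all invented), it reads like a simple distance check driving the legs between two behaviors:

```python
# Hypothetical reconstruction of the dress's defensive reflex:
# if a proximity reading crosses a threshold, pull the legs in.
# The sensor is simulated; the post itself only guesses "sonar?".

import itertools
import time

RETREAT_DISTANCE_CM = 40  # assumed personal-space threshold

# Simulated sonar readings: someone approaches, then backs off.
simulated_readings = itertools.cycle([120, 90, 60, 35, 30, 55, 100])

def read_proximity_cm():
    """Stand-in for reading a sonar/IR rangefinder."""
    return next(simulated_readings)

def reflex_step(distance_cm):
    """Pick the leg behavior for the current reading."""
    if distance_cm < RETREAT_DISTANCE_CM:
        return "pull legs in tight"   # the 'scared' posture
    return "wave legs slowly"         # idle animation

for _ in range(7):
    d = read_proximity_cm()
    print(f"{d:4d} cm -> {reflex_step(d)}")
    time.sleep(0.05)
```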

I’ve been strongly considering making a barbot (a.k.a. a drinkbot), even though I don’t usually drink at home. I haven’t given much thought to its cosmetics; instead I’ve been focusing on the mechanics of the bot. I figure the mechanics will dictate the form, and if one sprinkles enough LEDs on it, it can look fine.
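
To give a feel for what I mean by the mechanics dictating the form, here’s a toy sketch of the dispense logic I keep circling around: one pump per ingredient, where a pour is just running each pump long enough for its share of the recipe. The Pump class and flow rate are made up for illustration.

```python
# Toy drinkbot dispense logic. Real hardware would drive GPIO pins
# or a motor controller; here a sleep stands in for energizing a pump.

import time

FLOW_RATE_ML_PER_S = 10.0  # assumed peristaltic pump throughput

class Pump:
    def __init__(self, name):
        self.name = name

    def run(self, seconds):
        print(f"  {self.name}: pumping for {seconds:.1f}s")
        time.sleep(seconds)  # stand-in for switching the pump on

def pour(recipe, pumps):
    """recipe maps ingredient -> millilitres to dispense."""
    for ingredient, ml in recipe.items():
        pumps[ingredient].run(ml / FLOW_RATE_ML_PER_S)

pumps = {name: Pump(name) for name in ("gin", "tonic", "lime")}
pour({"gin": 45, "tonic": 120, "lime": 15}, pumps)
```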