Saturday, June 20, 2015

Noel Sharkey's Mistake

I heard a radio piece about a new "emotionally aware" robot being introduced in Japan. The interviewer talked to
Noel Sharkey, a UK computer scientist.

Sharkey cast doubt on the "emotionally aware" claim. The robot is not really emotionally aware, Sharkey claimed; rather, it's just using an algorithm to detect emotion in humans and respond appropriately.
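To make the "just an algorithm" point concrete, here is a deliberately toy sketch of what emotion detection can look like at its crudest. Everything in it is invented for illustration (the keyword lists, the scoring, the function name); real systems like the robot described use trained classifiers over facial features and speech, but the underlying principle, algorithmic pattern matching, is the same.

```python
import re

# Toy emotion "detector": score an utterance against hand-picked keyword
# lists. Purely illustrative -- not how any actual robot does it, but it
# shows that "detecting emotion" can be an ordinary algorithm.
EMOTION_KEYWORDS = {
    "joy":     {"happy", "great", "wonderful", "love", "delighted"},
    "sadness": {"sad", "miss", "lonely", "cry", "lost"},
    "anger":   {"angry", "hate", "furious", "annoyed", "unfair"},
}

def detect_emotion(utterance: str) -> str:
    # Extract lowercase words, ignoring punctuation.
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    # Score each emotion by keyword overlap; "neutral" if nothing matches.
    scores = {emo: len(words & kws) for emo, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I am so happy, this is wonderful"))  # joy
print(detect_emotion("the weather report for tomorrow"))   # neutral
```

Whether you call the output "awareness" is exactly the question at issue; the sketch just shows there is nothing magical required to produce it.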

I wonder just how Prof. Sharkey thinks people detect emotion in other humans? If not an algorithm, what does he think is going on in the brain? Is it magic?

I'm always amused to hear people -- even computer experts like Sharkey -- complain that computers can't "really" do something, whether it be compose music, or write poetry, or detect emotion. But these naysayers never seem to explain what it would mean to "really" do these things. It's like complaining that airplanes don't "really" fly because they don't flap their wings the way birds do.

9 comments:

That's very silly indeed. Dutch philosopher Herman Philipse, in God in the Age of Science, makes this very clear: when we say X loves Y, Y can only know this through language (words), body language, facial expression, and behaviour. What else can this be than algorithmic?

Sociopaths are very good at pretending to be emotionally aware. So much so that they have been known to fool close family. What is this other than the use of an algorithm? It might even be said that sociopaths are superior to "normal" people in that they can recognize and "use" this algorithm to their benefit.

@ Acartio Tonsa: You're making the same mistake Jeff pointed out that Sharkey is making. Sociopaths do not feel empathy for people (I highly doubt computers do either). They might be good at faking that, but if they can accurately determine the emotions of other people, what is that if not emotional awareness?

I would be skeptical that this robot is really very good at detecting emotions, mainly because it sounds like a hard problem (but there have been a lot of advances in image analysis and face recognition, so maybe).

I think there is a distinction between having an empathetic awareness of emotion, where you feel the same thing yourself, and classifying an emotion. I could say, for instance, that I could probably detect whether some people were enjoying a sumo wrestling match, but might never really feel that way, and would even struggle for an analogy. After some experience I could say: they have that look, there must be a sumo wrestling match coming up, but still not really "get it."

Likewise, it seems reasonable that a sociopath can differ from others in having a keen analytical understanding of emotions that they themselves do not feel.

It's all algorithms, sure, but I think human algorithms tend to be very different, and often involve direct internal modeling to an extent that computer algorithms do not. E.g., I believe that if I understand that someone is feeling grief, I might do this by triggering related regions of my own brain and even adopting the same facial expression (at least I think there is MRI evidence that memory recall works like this). In order to get a computer to do that, you would first need to give it the ability to have the emotions it is detecting.

Maybe this sounds like splitting hairs, but I think this is a significant distinction with ethical consequences. If you had a 100% accurate human emotion detector, there would be no problem using it as needed or turning it on and off. If you really had a conscious computer capable of feeling the same emotions, then it would need to be given the same rights as any other sentient being.

How can a computer be emotionally aware when it is not self-aware? It is not conscious; it has no sense of "self" with which to be empathetic. All it is doing is mathematics -- comparing words or body movements against lists that were programmed into it.

It might produce a response that is a simulacrum of a real person's response, but it experiences nothing emotional or empathetic. I suppose one might as well argue that a sphygmomanometer is emotionally aware as it tracks a human's emotional blood pressure response.

Perhaps by "really" he means that the algorithm used by humans is orders of magnitude more "sophisticated" than that used by a silicon-based computer. In the absence of a full scientific understanding (and reproduction) of human emotions, one should be allowed to use metaphors. Perhaps. Of course, I see your point of view.

And I would argue that a sphygmomanometer is, indeed, partially emotionally aware. Your mistake is to think that "emotional awareness" is a black-and-white property; either something or someone is emotionally aware or it is not. I would argue instead that there are degrees of emotional awareness, and it is perfectly reasonable to say that a sphygmomanometer is very slightly emotionally aware and the Japanese robots somewhat more so.

But a sphygmomanometer is merely a complicated lever, an auto pen which transduces pressure into a number. It is only a human, such as yourself, who interprets any changes in the pen's output as having emotional meaning or intent, not the device itself. Imagine the sphygmomanometer observing a tree falling in the forest...

As I understand it, human emotional processing is dependent on the idea of empathy -- the ability to imagine how oneself would feel in the same situation as the person being observed. The computer cannot do this: it has no sense of self with which to compare, and it cannot actually feel or experience any real emotion. A lie detector measures physiological responses, but it cannot experience emotional stress itself. A well-programmed computer can give a convincing simulacrum of the emotional responses of a human (its programmer), but without actual emotions, can it truly be said to be emotionally aware?

What are "actual emotions"? How could we test to see if something or someone has them or doesn't have them? Until we get a more rigorous definition of what we are talking about, I don't see how to proceed.

How do you know computers cannot have a "sense of self"? For example, suppose we give a computer a camera and the ability to inspect its surroundings, including itself. Would it then not have a sense of self?