
Welcome to Self Help for Robots. I'm your host, C.J. Pitchford, and this episode, “Robots have feelings 2,” will explore what a model of emotions might be useful for. But before we begin, let's define emotions as a holistic interpretation of a mental state. When talking about non-robots, we could be referring to ‘hunger’ or ‘pain,’ but I hope that's not the case for robots, which lack the biological mechanisms.

As I mentioned last week, a lot of the research into emotions is based on an incomplete understanding of the physiological sources—without even a comprehension of what the emotions might mean.

I mean, if any emotion could be a goal for a robot or non-robot, I would hazard a guess that “joy” would rank right up there!

It's better than “meh!” And if those two emotions can be related, following the research of Robert Plutchik, we can say that understanding the difference between the two is relative and subjective, but still possibly quantifiable.

Now, if we can learn, as in the last episode, a way to model a simple emotion like “I am OK” along three axes, then we can extend that idea beyond mere identification of emotions to how they are transformed, and possibly how they can be manipulated, and that has serious implications.
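To make the three-axis idea concrete, here is a minimal sketch in code. The axis names (gain, domain, event) are the ones used later in this episode; the choice of a numeric scale from -1.0 to +1.0, and the particular values for “I am OK,” are my illustrative assumptions, not part of the model as presented.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    """A simple emotion as a point along three axes.

    The encoding of each axis to the range [-1.0, +1.0] is an assumption
    made for illustration only.
    """
    name: str
    gain: float    # negative feeling (-1.0) to positive feeling (+1.0)
    domain: float  # inner self (-1.0) to connection outside oneself (+1.0)
    event: float   # past (-1.0) through present (0.0) to future (+1.0)

# "I am OK": a mildly positive feeling about the inner self, in the present.
i_am_ok = Emotion(name="I am OK", gain=0.3, domain=-0.5, event=0.0)
print(i_am_ok)
```

Because each emotion is just a point in a three-dimensional space, identification becomes comparison of coordinates, and transformation becomes movement along the axes.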

You can read more on the website for this podcast, selfhelp4robots.com, where I wrote about defining simple emotions. David Anderson of Caltech proposes that even insects have emotion-like states, or what he calls “emotion primitives,” so why not robots?

In defining emotions, I'm clear that I'm separating feelings from behaviors. Many models I've seen, including one that I've mentioned many times previously, the Pleasure-Arousal-Dominance Model, are used incorrectly, confusing emotion with behavior. That's not the only confusion inherent in a model where each so-called measurable axis is not independent of the others.

In the Orthographic Model of Emotions from the last episode, an operative hypothesis is that a model can be both descriptive and predictive. In this case, let's look at the feeling of joy that can be found in a relationship, and at how feelings of trust can be generated by a transformation of joy over the domains of inner and outer self, as well as by linking events between the present and the future.

To make that less abstract, let's consider a robot or non-robot, just, you know, robotically giving flowers to a potential companion or partner. Would a robot mimic an old tradition without a purpose? Could a robot understand the concept behind the feelings associated with the act?

Without a model to work from, if it seems that big data and artificial intelligence are mining mountains of data for nuggets of truth, that's because that is exactly what is going on.

But, what if we can find a mathematical model for that little nugget and use the predictive and descriptive properties of the model to jump to the head of the line and skip the mining of whole mountains?

So, if a robot, or non-robot, wishes to establish trust in a relationship, it needs to understand what trust is. Think of how incredibly different the statement “I trust you” and its implications are when compared with “I trusted you!”

Let's examine the emotion of trust as a positive feeling, which we can measure on the axis of gain, and as an emotion that embodies a connection outside of oneself, which we can measure on the axis of domain.

But what of the sense of time on the event axis, where the feeling of trust exists in the context of the present but may be based on expectations? Put another way, trust may reside in the present, but it is based on judgments of the past and anticipation of the future.

We can make predictions using a model! Expecting a “future” event on the axis of events, we might see in a model that the positive feelings associated with another emotion experienced in the present, joy, can be transformed into trust by repeating the joy over time.

I look forward to the next episode, and I hope you do too. We'll continue this discussion in the next episode, “Robots can't feel what I feel.”