It also depends upon which realm you are creating the artificial intelligence for. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine interacting with a human world.

Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.

That would be an automated response. I have voicemail. It sounds like me. I may sound happy on it. Does that make my voicemail machine happy? No.

It also depends upon which realm you are creating the artificial intelligence for. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine interacting with a human world.

That would be simulated emotions. Look at my point about my voicemail.

Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.

Emotions wouldn't be needed. Protocols would. Example:

P1: If weapons, kill bad guys unless P2.

P2: Don't kill kids unless P3.

P3: Weapons aimed at you = bad guys. See P1.

That wouldn't make the machine happy, sad, angry, etc. It would be following protocols.
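The thread names no language, but those protocols can be sketched as plain conditionals in Python. Everything here (the `Target` fields, the `engage` function) is a made-up name for illustration; the point is just that the behavior comes from rules, not from any emotional state.

```python
from dataclasses import dataclass

# Hypothetical target description; the fields mirror the P1-P3 protocols above.
@dataclass
class Target:
    is_child: bool
    weapon_aimed_at_us: bool
    hostile: bool

def engage(target: Target) -> bool:
    """Return True if the protocols permit engaging the target."""
    # P3: weapons aimed at you = bad guys (see P1)
    if target.weapon_aimed_at_us:
        return True
    # P2: don't kill kids
    if target.is_child:
        return False
    # P1: if weapons, kill bad guys
    return target.hostile

# The machine is neither happy nor sad about either outcome; it evaluates rules.
print(engage(Target(is_child=False, weapon_aimed_at_us=False, hostile=True)))  # True
print(engage(Target(is_child=True, weapon_aimed_at_us=False, hostile=True)))   # False
```

The rule ordering is the whole design: P3 overrides P2, which overrides P1, exactly as the "unless" clauses chain above.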

But, just as the intelligence would be artificial, it stands to reason that machines would learn emotions and use them to their advantage when interacting with humans. That's all I'm saying. Whether they are simulated or really felt is not the point; the point is emotions would very well be advantageous to artificial intelligence.

If you are talking about artificial intelligence in the realm of war, then yes, why would you want emotions? But why would you want artificially intelligent weapons of war at all? They could very well learn to the point where they saw their creators as a threat and kill all of us.

But there are definitely scenarios where emotions in AI would be beneficial; as I mentioned before, therapy robots for one.

It also depends upon which realm you are creating the artificial intelligence for. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine interacting with a human world.

Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.

You see, here is the problem: like consciousness, emotions are not fully definable or understandable. Man still cannot define exactly what consciousness is, and the same goes for emotions.

We know how a super chess-playing machine will function, the same with a military-programmed killer drone and the robots of the future. That is relatively easy to understand, and programs are already being developed for it. But how would you program anything like human consciousness and/or emotions into a machine when you do not know exactly what they are?

I've gotten into other debates about conscious machines of the future, and it probably wouldn't surprise you how many people, in spite of the advancing science, say it will never happen. But they, you, all of us are not sure and cannot define in an absolute sense exactly what consciousness is.

The nightmare scenario is this: a supercomputer possessing all of the cognitive functions of a human except true consciousness [whatever that is], and of course lacking human feelings and emotions, could take control of Man by its programmed gaming capacity, and Man would lose, his very emotions becoming a hindrance to effective action. The machine will think faster and will not hesitate to act: pure calculating intelligence unfettered by human feelings and emotions.
And it will win.

Can they build a fail-safe into the advancing AI now before it is too late? Many computer scientists and people in business such as Bill Gates and
Musk are already warning about the dangers.

"A lot of movies about artificial intelligence envision that AI's will be very intelligent but missing some key emotional qualities of humans and
therefore turn out to be very dangerous."
-Ray Kurzweil

Kurzweil, though, is optimistic. He thinks there will be a happy merging of Man and machine and we will have a better world.
But I call myself a "Sciencefictionalist", someone who projects future scenarios that may become, and in a sense are already becoming, the future. In this case the nightmare scenarios of science fiction cannot be ignored. In the sci-fi world Man usually triumphs in the end. Would you bet your life and your future on Man being able to control the Pandora's box being opened with advancing AI?

And it would be beneficial in terms of therapy robots, the only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it set out to do.

There would be constants. Just as with humans, our brains never stop our hearts or stop us from breathing; there are certain functions our bodies perform to keep us alive.

I'm not quite sure how that dismisses the point I made.

originally posted by: gpols
a reply to: Ghost147
What would you program into a machine to keep it from destroying its kind? A machine programmed to kill would kill indiscriminately. If a machine's communication got damaged in a battle and it was unable to update with the rest of the cluster, what would keep the other machines from destroying the malfunctioning machine?

As TerryDon79 already mentioned, a "do not kill your own kind" line of code is not equivalent to emotion.

None of the emotions you listed are even remotely similar to a computer's diagnostics report.

Emotion is any relatively brief conscious experience characterized by intense mental activity and a high degree of pleasure or displeasure. The
diagnostics report you gave as an example does not cause the machine any distress at a mental level.

originally posted by: gpols
a reply to: Ghost147
It also depends upon which realm you are creating the artificial intelligence for. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine interacting with a human world.

You're confusing beneficial to the robot with beneficial for humans exclusively. A robot therapist would suffer from having emotions, but if it could portray emotions (and not actually feel them), that would be beneficial to the subject, not to the robot.

So again. What benefit would a robot get from emotions?

originally posted by: gpols
a reply to: Ghost147
Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.

A computer doesn't need emotions to not kill everything; all it needs is a line of code that prevents it from killing everything.
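As a toy illustration of that point, a hard-coded constraint is just a conditional, not a feeling. The action names and the `FORBIDDEN` set below are invented for the example, not any real robotics API.

```python
# Invented action names; any real system would define its own vocabulary.
FORBIDDEN = {"harm_human", "destroy_machine"}

def execute(action: str) -> str:
    """Refuse forbidden actions; carry out everything else."""
    # The machine "refuses" without distress: the check is one membership test.
    if action in FORBIDDEN:
        return "refused"
    return "done: " + action

print(execute("harm_human"))  # refused
print(execute("open_door"))   # done: open_door
```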

originally posted by: gpols
a reply to: TerryDon79
the point is emotions would very well be advantageous to artificial intelligence.

You have yet to provide an example where emotions would be advantageous to the machine with AI, not the people the AI interacts with.

And it would be beneficial in terms of therapy robots, the only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it set out to do.

But only if we told it to. That's my whole argument. If we don't program it to, or program it to have the ability to program itself, it can't learn
something we don't want it to.

So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I never watched Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.

Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?

And it would be beneficial in terms of therapy robots, the only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it set out to do.

But only if we told it to. That's my whole argument. If we don't program it to, or program it to have the ability to program itself, it can't learn
something we don't want it to.

But they can program it to program itself, like IBM's Watson plugged into the internet and beating the best game players in the world on Jeopardy. A machine of the future programmed to learn and having access to the web will be able to........

Better still, what will it not be able to learn or do?

Controlling and/or eliminating biological life might just be stage one of whatever agenda its calculating [thinking] leads it to.

So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I never watched Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.

Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?

How could it eventually teach itself anything if we didn't tell it to? If it's not in its programming, it can't do it. It's that simple.

And it would be beneficial in terms of therapy robots, the only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it set out to do.

But only if we told it to. That's my whole argument. If we don't program it to, or program it to have the ability to program itself, it can't learn
something we don't want it to.

But they can program it to program itself, like IBM's Watson plugged into the internet and beating the best game players in the world on Jeopardy. A machine of the future programmed to learn and having access to the web will be able to........

Better still, what will it not be able to learn or do?

Controlling and/or eliminating biological life might just be stage one of whatever agenda its calculating [thinking] leads it to.

See my above post about programming.

You and gpols seem fixated on the fact that an AI could program itself. It wouldn't be able to if the ability to learn wasn't already programmed into it.

There are robots that can learn your environment (the inside of your house) by using sensors. But they don't want to learn anything; they're programmed to do their job.

If an AI wanted to learn anything, then either (a) it would have to be programmed to want things outside of its program, or (b) it would have to have been programmed to learn certain things.
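Option (b) can be sketched in a few lines, with every name invented for illustration: the robot below can only "learn" through the one update path its designers built in, and nothing in its program can add new goals.

```python
class CleaningRobot:
    """Toy robot: learns a room map because that capability was built in."""

    def __init__(self):
        self.room_map = {}       # learnable state: updated by the sensor loop
        self.goals = ("clean",)  # fixed: no method in the class ever changes this

    def sense(self, cell: str, blocked: bool) -> None:
        # The only "learning" this robot does: recording sensor readings.
        self.room_map[cell] = blocked

bot = CleaningRobot()
bot.sense("kitchen", False)
print(bot.room_map)  # {'kitchen': False}
print(bot.goals)     # ('clean',)
```

The design choice mirrors the argument: because no code path rewrites `goals`, the robot cannot "want" to learn emotions, however many rooms it maps.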

So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I never watched Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.

Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?

How could it eventually teach itself anything if we didn't tell it to? If it's not in its programming, it can't do it. It's that simple.

If it really is "true AI", it should be able to learn and adapt to anything. However, it would be intelligent enough to realize the massive downfalls emotion intrinsically has. Emotion is basically anti-logic, so I don't know why it would ever want to learn it in the first place.
