The creativity and intuition required for Go are based on a finite set of specific positions and moves of the stones. A computer can simply crunch the numbers and make a prediction based on this large, yet finite, set, thus simulating what appears to be intuition and creativity.

This works with games like chess, but as Bristollad and I have tried to explain, it is not possible with Go. Though the possibilities in Go are still finite, the range is far too large.
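To make "far too large" concrete, here is a rough back-of-envelope comparison of game-tree sizes, using the commonly cited ballpark averages (roughly 35 legal moves over roughly 80 plies for chess, roughly 250 over roughly 150 for Go). The figures are illustrative, not exact counts:

```python
import math

# Approximate game-tree sizes, computed in log10 to keep the numbers readable.
# Branching factors and game lengths are the commonly cited averages.
chess_log10 = 80 * math.log10(35)    # log10 of 35 ** 80
go_log10 = 150 * math.log10(250)     # log10 of 250 ** 150

print(f"chess: roughly 10^{chess_log10:.0f} possible games")
print(f"go:    roughly 10^{go_log10:.0f} possible games")
```

Both numbers are finite, but Go's is hundreds of orders of magnitude larger, which is why the brute-force approach that works for chess engines does not transfer directly.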

I am talking about Artificial INTELLIGENCE; you are defining intelligence merely as problem solving.

Intelligence is defined as general cognitive problem-solving skill. AlphaGo is simply a rather narrow (not AGI) intelligence. Many AI researchers believe that an artificial general intelligence, capable of accomplishing as wide a range of goals as humans can, may be developed within a couple of decades.

I have given examples of intelligence beyond the bounds of problem solving, something that computers are completely incapable of right now.

Your point is that problem solving is not intelligence?

Creativity is based on thinking outside of set boundaries and it is emotions that allow us to transcend the boundaries of reason.

There's nothing magic about creativity, and irrationality could be due to brain damage rather than emotional maladaptation.

No, my point is that problem solving is only one small aspect of intelligence.

There's nothing magic about creativity...

Didn't say there was.

...and irrationality could be due to brain damage rather than emotional maladaptation.

Yes, but this doesn't change anything that I said.

"My religion is not deceiving myself."Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

my point is that problem solving is only one small aspect of intelligence.

AlphaGo, or DeepMind, has what we might call a very narrow intelligence.

... sometimes one needs an intense emotional reaction (outrage, compassion, desire) to spark an intelligent response, to get one motivated to think about solutions.

So you seem to accept that DeepMind is some sort of limited intelligence that solves problems of a very limited range. Obviously, it doesn't require emotions to function or to "spark an intelligent response." You say, however, that emotions are sometimes needed to spark intelligence. What is it about these 'sometimes' that requires emotion?

Really? You cannot think of any examples from your life? I am sure you can!

"My religion is not deceiving myself."Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

I remember a conversation between a Tulku and Terton decades ago. One mentioned Namkhai Norbu talking about all the world systems where Dzogchen was taught. The terton stated teaching AI or computer based intelligence was problematic because they found it near impossible to see the nature of mind.

I don't think you can call something without agency 'an intelligence'.

Interestingly, some months ago Facebook shut down and revised an experimental dialogue-AI model when its learning "led to divergence from human language as the agents developed their own language for negotiating." During self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

Really? You cannot think of any examples from your life? I am sure you can!

Where an intense emotional reaction is required to think about solutions to a problem? That happens, but I don't believe the intense emotion is required; in fact, it may interfere with solving the problem.

It’s curious that a Buddhist of all people would fail to see the value of equanimity.

I don't think you can call something without agency 'an intelligence'.

Interestingly, some months ago Facebook shut down and revised an experimental dialogue-AI model when its learning "led to divergence from human language as the agents developed their own language for negotiating." During self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

It’s curious that a Buddhist of all people would fail to see the value of equanimity.

It is curious that a non-Buddhist would fail to see the value of emotion.

You also forget that I am a Vajrayana Buddhist. In Vajrayana, emotions are not our enemies.

"My religion is not deceiving myself."Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

I don't think you can call something without agency 'an intelligence'.

Interestingly, some months ago Facebook shut down and revised an experimental dialogue-AI model when its learning "led to divergence from human language as the agents developed their own language for negotiating." During self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

AFAIK it was not spontaneous; rather, the AI was already programmed to negotiate a new "language" if it was more efficient than natural language.

Perhaps "spontaneous" wasn't the right word, and it is true that the language came out of increasing the efficiency of the task. In any case, with advanced AI still clearly in its infancy, it is remarkable that it can already take on a life of its own in ways that we neither expect nor can truly explain.

You still haven't explained why intense emotion may be required for problem-solving, if you're claiming this.

Yes I have.

"My religion is not deceiving myself."Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

You still haven't explained why intense emotion may be required for problem-solving, if you're claiming this.

Yes I have.

With this?

Grigoris wrote: Emotions, the randomising effect on logic, are a positive evolutionary trait because they allow for reactions that may appear illogical but ultimately may lead to survival. It is because of this ability to randomise that humans have been able to survive, because of the ability to innovate. A computer, for example, will not get bored, and so it will not look for an avenue of change/escape and thus will not be open to innovation. Intelligence needs emotion.

I thought we both dismissed this idea, but I'll revisit it assuming you still believe it merits attention.

In the DeepMind Breakout example that I posted earlier, the AI begins learning by performing random actions. Eventually, using reinforcement learning, it learns which actions lead to results that accomplish its goal of a high score. Indeed, the AI will never get bored and will keep playing until it reaches its goal. Most people would probably get bored or discouraged and quit before reaching their full potential.

In the Breakout game, the AI began playing worse than most people would but quickly played better than any human.
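For concreteness, that trial-and-error loop can be sketched in a few lines. This is a toy epsilon-greedy value-learning agent on a two-action task, not DeepMind's actual DQN; all names and parameters here are illustrative:

```python
import random

def reward(action, rng):
    # Toy task: action 1 "scores" 80% of the time, action 0 only 20%.
    return 1.0 if rng.random() < (0.8 if action == 1 else 0.2) else 0.0

def train(steps=5000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]                      # estimated value of each action
    for _ in range(steps):
        if rng.random() < epsilon:      # explore: act randomly
            a = rng.randrange(2)
        else:                           # exploit: pick the best-known action
            a = 1 if q[1] >= q[0] else 0
        q[a] += alpha * (reward(a, rng) - q[a])  # incremental value update
    return q

q = train()
print(q)  # q[1] ends up well above q[0]: the agent has learned the better move
```

Like the Breakout agent, it starts out effectively at random (its value estimates are all zero) and never gets bored; the exploration rate, not emotion, supplies the randomness.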

The idea that emotions are functional evolutionarily because of a randomizing effect is odd in itself, btw.

Interestingly, some months ago Facebook shut down and revised an experimental dialogue-AI model when its learning "led to divergence from human language as the agents developed their own language for negotiating." During self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

AFAIK it was not spontaneous; rather, the AI was already programmed to negotiate a new "language" if it was more efficient than natural language.

Perhaps "spontaneous" wasn't the right word, and it is true that the language came out of increasing the efficiency of the task. In any case, with advanced AI still clearly in its infancy, it is remarkable that it can already take on a life of its own in ways that we neither expect nor can truly explain.

I didn't see any evidence that no one could explain the Facebook AI thing; the explanation actually seemed fairly simple. I have no background in AI, but I have enough background in programming that, personally, I found nothing at all remarkable about the story, other than that the "language" the AIs invented to speak to each other was pretty funny. Again, they had programmed the AIs to come up with their own language syntax, and English was inefficient for the task. If you broke down what the AIs were saying mathematically (deciding who gets what, if I recall), it was not that surprising. Even with my limited programming knowledge I can conceptualize a series of if-then statements and similar constructs that would enable something roughly like that to happen. It was an interesting story, but I didn't see why it got the press it did, nor what was supposed to be so mysterious about it.
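As a toy illustration of that mathematical breakdown (my own reconstruction, not Facebook's actual code): if repeating a token encodes a quantity, the resulting "language" looks bizarre to humans but is trivially machine-readable:

```python
from collections import Counter

def encode(demand):
    # {"ball": 3, "hat": 1} -> "ball ball ball hat"
    return " ".join(item for item, n in demand.items() for _ in range(n))

def decode(message):
    # Count token repetitions to recover the quantities.
    return dict(Counter(message.split()))

msg = encode({"ball": 3, "hat": 1})
print(msg)          # ball ball ball hat
print(decode(msg))  # {'ball': 3, 'hat': 1}
```

Nothing mysterious is happening: a degenerate repetition code emerges because it is efficient for the negotiation task, even though it reads like gibberish.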

To me it was less a story about AI gaining sentience and more a story about the complexity of our tools making for amusing (and sometimes scary) anecdotes.

It would be interesting to know what the limitations and parameters were on how it used language.

"it must be coming from the mouthy mastermind of raunchy rapper, Johnny Dangerous”

(Actually, he probably said "Wu," which is the Chinese for Mu, a Japanese word.

Mu is usually translated "no," although the late Robert Aitken Roshi said its meaning is closer to "does not have." Zen originated in China, where it is called "Chan." But because western Zen has been largely shaped by Japanese teachers, we in the West tend to use Japanese names and terms.)

In the DeepMind Breakout example that I posted earlier, the AI begins learning by performing random actions. Eventually, using reinforcement learning, it learns which actions lead to results that accomplish its goal of a high score. Indeed, the AI will never get bored and will keep playing until it reaches its goal. Most people would probably get bored or discouraged and quit before reaching their full potential.

Boredom and discouragement can be positive traits too. They can be a mechanism by which somebody moves on from something that is not fruitful or productive (playing pointless board games, for example). A computer will not get bored or discouraged and so will play the board game to completion/perfection. So what? How is this a sign of intelligence? Sometimes getting bored and moving on is also a sign of intelligence, but because it is based in emotion (frustration, for example), a computer will not do it. So again we have another clear example of the evolutionary function of emotion and how emotion plays a role in intelligence.

The idea that emotions are functional evolutionarily because of a randomizing effect is odd in itself, btw.

Real life is not always about reasoned and well analysed actions, sometimes taking a risk is what is needed, or even changing the rules of the game...

Now, because I am getting bored of this repetitive and circular conversation, I am going to make an emotionally based (and intelligent) decision to remove myself from it, since I am tired of making (and supporting) my point repeatedly (unlike a computer, which would continue to do so ad nauseam, which is rather unintelligent, I would say).

"My religion is not deceiving myself."Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

In the DeepMind Breakout example that I posted earlier, the AI begins learning by performing random actions. Eventually, using reinforcement learning, it learns which actions lead to results that accomplish its goal of a high score. Indeed, the AI will never get bored and will keep playing until it reaches its goal. Most people would probably get bored or discouraged and quit before reaching their full potential.

Boredom and discouragement can be positive traits too. They can be a mechanism by which somebody moves on from something that is not fruitful or productive (playing pointless board games, for example). A computer will not get bored or discouraged and so will play the board game to completion/perfection. So what? How is this a sign of intelligence?

You seemed to be claiming that the randomizing effect of emotion is functional in problem solving, and that because machines lack emotion they lack this necessary aspect of intelligent problem solving. The DeepMind breakout example clearly dispels this odd notion. We don't even need to venture into theoretical AI.

Sometimes getting bored and moving on is also a sign of intelligence, but because it is based in emotion (frustration, for example), a computer will not do it.

Boredom is an emotion concept that's based in low arousal and unpleasant affect. Machines lack affect because their intelligence is not built on the substrate of a biological organism. We've been through this early on, but you don't seem to accept or understand this fundamental difference.

So again we have another clear example of the evolutionary function of emotion and how emotion plays a role in intelligence.

Which is irrelevant to AI because they lack biological bodies.

The idea that emotions are functional evolutionarily because of a randomizing effect is odd in itself, btw.

Real life is not always about reasoned and well analysed actions, sometimes taking a risk is what is needed, or even changing the rules of the game...

In the DeepMind Breakout example, the AI's first actions were utterly random. It was only after evaluating the effects of these random actions that it learned which were effective in accomplishing its goal.

People cheat to gain a selfish advantage. Cooperative behavior can be mutually beneficial to all.