But while this work is impressive, it highlights one of the significant limitations of deep learning. Compared with humans, machines using this technology take a huge amount of time to learn. What is it about human learning that allows us to perform so well with relatively little experience?
...
By contrast, the game is hard for machines: many standard deep-learning algorithms couldn’t solve it at all, because there is no way for an algorithm to evaluate progress inside the game when feedback comes only from finishing.

The best machine performer was a curiosity-based reinforcement-learning algorithm that took some four million keyboard actions to finish the game. That’s equivalent to about 37 hours of continuous play.

It's not surprising that these brute-force machine-learning or "deep learning" systems have problems when there is little to signal a 'good' path from a 'bad' one. There is very little intelligence in current AI that people didn't already program into it.
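To see why sparse feedback is so crippling, here is a toy Python sketch (purely illustrative; the numbers and setup are invented, not taken from the article). When reward only arrives at the very end of a long sequence of correct actions, a randomly exploring agent almost never sees any reward at all, so there is no signal to learn from:

```python
import random

def random_walk_success(n_steps, n_actions, trials=10000, seed=0):
    """Estimate how often a uniformly random agent finishes a task that
    needs one specific action at each of n_steps steps, with reward only
    at the very end (a toy model of 'feedback comes only from finishing')."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Success requires guessing the single correct action every step.
        if all(rng.randrange(n_actions) == 0 for _ in range(n_steps)):
            wins += 1
    return wins / trials

# With only 10 correct keypresses needed, out of 4 choices each, a random
# agent essentially never reaches the reward -- success probability (1/4)^10.
print(random_walk_success(10, 4))
```

Curiosity-based methods like the one in the article work around this by manufacturing their own internal reward for reaching novel states, so progress is possible even before the real reward is ever seen.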

Leisure Suit Larry in the Land of the Lounge Lizards

I never really understood how AI works. Does it simply look at all the possibilities of an action in some advanced search, weigh them, then pick from the top of the list? Or is there something far more advanced going on?

And how is the data fed in? We hear about AI computers "reading" medical journals. Is it actually understanding the text in the files? Or is that data simply converted to some kind of database and then loaded into the AI computer?

A cosmic shift will occur when someone invents the digital equivalent of dopamine.

I wish I understood exactly how AI works too.

With 'deep learning' there is really no learning or understanding in the way we imagine for a classical, human-mimicking AI brain. It works because we now have massive computing power that can churn through huge databases of patterns. And for deep learning the machine builds that database itself from the information we feed in; we don't hand-code it for them.
http://karpathy.github.io/2016/05/31/rl/

Now back to RL. Whenever there is a disconnect between how magical something seems and how simple it is under the hood I get all antsy and really want to write a blog post. In this case I’ve seen many people who can’t believe that we can automatically learn to play most ATARI games at human level, with one algorithm, from pixels, and from scratch - and it is amazing, and I’ve been there myself! But at the core the approach we use is also really quite profoundly dumb (though I understand it’s easy to make such claims in retrospect).

A 'learned' machine doesn't understand the physical-world difference between these two things, but it can recognize them from a database created from their images.

After being fed millions of pictures, the image-recognition software created by Google enabled an artificial neural network to see shapes in images, creating strange, fantastic and psychedelic pictures that at times could be likened to impressionist art.

So basically exactly as I said? No real "thinking" going on? Just a very sophisticated search and probability algorithm?

It is my understanding that this is the way AI chess engines work: they "simply" search through the possible moves and evaluate the outcomes, though in practice they prune the tree with heuristics rather than examining every line to the end. I am by no means a good chess player, but my understanding is that this is pretty much what human players do as well; the computer just works much faster.
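The "run through the moves and calculate the outcome" idea is the minimax algorithm. Here is a minimal sketch on a game tiny enough to search exhaustively (a take-1-or-2 stones game, chosen for brevity; real chess engines use the same idea but with depth limits, pruning, and a heuristic board evaluation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones, to_move):
    """Exhaustive minimax for a take-1-or-2 game where taking the last
    stone wins. Returns +1 if player 0 wins with best play, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return 1 if to_move == 1 else -1
    # Try every legal move and assume the opponent also plays perfectly:
    # player 0 maximizes the score, player 1 minimizes it.
    scores = [best_score(stones - take, 1 - to_move)
              for take in (1, 2) if take <= stones]
    return max(scores) if to_move == 0 else min(scores)

# Piles that are multiples of 3 are losses for the player to move.
print([best_score(n, 0) for n in range(1, 7)])  # -> [1, 1, -1, 1, 1, -1]
```

Chess has far too many positions to search like this to the end, which is exactly why engines stop early and substitute an evaluation function for the true game result.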

The photo above illustrates very well how amazingly the human brain works. We really aren't doing any kind of search over images, or at least I don't think so. For some reason, we can easily tell the difference between a puppy and a muffin even when most of the data is hidden from us.

And there are things that Aplysia can do that Google could only dream of.

Yes, joeyd999, you are quite right. In fact, Eric Kandel received a Nobel Prize for telling us about what they can do and how they can do it. But the key is not the neurotransmitter (any of them) so much as all those neurons.

There is something happening but it's unrelated to 'thinking'.

Because the massive amount of training data gets compressed into a totally abstract representation of the original data, some current AI methods can be easily fooled if you understand how they work: you can craft a completely different input that produces a pattern matching the learned response. No human would think these images are what the computer 'thinks' they are.
http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf

One interesting implication of the fact that DNNs are easily fooled is that such false positives could be exploited wherever DNNs are deployed for recognizing images or other types of data. For example, one can imagine a security camera that relies on face or voice recognition being compromised. Swapping white-noise for a face, fingerprints, or a voice might be especially pernicious since other humans nearby might not recognize that someone is attempting to compromise the system.
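The fooling mechanism the paper describes can be sketched with a toy one-unit "classifier" (the weights below are arbitrary stand-ins for a trained network's parameters, not real ones). Gradient ascent on the *input*, rather than the weights, drives the model's confidence up from what started as near-zero noise:

```python
import math

# A tiny hand-made 'classifier': confidence that an input belongs to class A.
# Weights are arbitrary stand-ins for a trained network's parameters.
W = [2.0, -1.5, 0.5, 1.0]

def confidence(x):
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid output

def fool(x, steps=200, lr=0.1):
    """Nudge the input along the confidence gradient until the model is
    sure it sees class A, even though we started from near-pure noise.
    This is the same mechanism behind the fooling images in the paper."""
    x = list(x)
    for _ in range(steps):
        p = confidence(x)
        # d(confidence)/dx_i = p * (1 - p) * W_i for a single sigmoid unit
        for i in range(len(x)):
            x[i] += lr * p * (1 - p) * W[i]
    return x

noise = [0.01, -0.02, 0.0, 0.01]
print(confidence(noise), confidence(fool(noise)))
```

The fooled input still looks like noise to a human, but the model reports very high confidence, because the optimization targets the model's abstract representation rather than anything a person would recognize.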

The key to learning is the neurotransmitter, IMHO. It is the reward our brain seeks for successful execution of actions that achieve a goal. It is the reason we repeat learned behavior -- to again experience the reward.
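For what it's worth, that reward-seeking intuition maps onto temporal-difference learning, where dopamine is often compared to a reward-prediction error. The sketch below is a cartoon of that analogy, not a brain model, and all the numbers are invented:

```python
def td_learn(rewards, alpha=0.5):
    """Track how the prediction error shrinks as a reward becomes expected."""
    value = 0.0          # current prediction of the reward
    errors = []
    for r in rewards:
        delta = r - value          # prediction error ('surprise')
        value += alpha * delta     # learn: move the prediction toward reality
        errors.append(delta)
    return errors

# A reward of 1.0 delivered repeatedly: big surprise at first, then the
# prediction catches up and the error fades -- the learned behavior
# persists even as the 'surprise' signal shrinks.
print(td_learn([1.0] * 5))  # -> [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Curiously, this is the same error signal that drives the reinforcement-learning algorithms discussed earlier in the thread, which is part of why the dopamine comparison comes up so often.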

No. The transmitters activate or modulate the activity of the neuron, and it is much more complicated than that. The same neurotransmitter can act very differently in different species, and even within the same species at different stages of development. Some neurotransmitters are found all over the CNS and PNS, doing their thing on neurons that are doing very different things in the system.

You often hear claims like this in connection with drug abuse and reward centers and the like, but it is a huge oversimplification.

But look, I don't want to start an argument about this. It is very complicated, not completely understood, and more than difficult to discuss in short posts. My "opinion" is based on being a neuroscientist for more than 30 years. I don't claim to know everything about neurons, learning and memory, but I do know some things about them.