One good point Gary makes in his article is that, while DeepMind's achievement is being trumpeted in the media as a triumph of "deep learning", their approach to Go in fact integrates deep learning with other AI methods (game tree search). In other words, it's a hybrid approach, one that tightly integrates two different AI algorithms in a context-appropriate way.
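To make the "hybrid" idea concrete, here is a minimal sketch (not DeepMind's actual system) of the basic pattern: a tree search whose leaf evaluation is delegated to a learned value function. In AlphaGo the evaluator is a trained deep network over board positions; in this toy the game and the `value_network` heuristic are invented purely for illustration.

```python
# Toy hybrid of tree search + learned evaluation (illustrative only).
# Stand-in for a trained value network: in AlphaGo this is a deep net
# scoring board positions; here it's just a hypothetical heuristic.
def value_network(state):
    return sum(state)  # higher sum = better for the maximizing player

def legal_moves(state):
    # Toy game: each move appends +1 or -1 to the state.
    return [+1, -1]

def apply_move(state, move):
    return state + [move]

def search(state, depth, maximizing=True):
    """Depth-limited tree search whose leaf evaluation is delegated
    to the (learned) value function -- the essence of the hybrid."""
    if depth == 0:
        return value_network(state)
    scores = [search(apply_move(state, m), depth - 1, not maximizing)
              for m in legal_moves(state)]
    return max(scores) if maximizing else min(scores)

# Pick the first move whose subtree scores best under the hybrid search.
best = max(legal_moves([]), key=lambda m: search(apply_move([], m), 2, False))
```

The point of the sketch is the division of labor: the search supplies lookahead, while the learned evaluator supplies position judgment that would be infeasible to hand-code.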

OpenCog is a more complicated, richer hybrid approach, which incorporates deep learning along with a lot of other stuff…. While Gary Marcus and I don't agree on everything, we do seem to agree that an integrated approach combining fairly heterogeneous learning algorithms/representations in a common framework is likely to do better than any one golden algorithm….

Almost no one doubts that deep learning is part of the story of human-like cognition (that's been known since the 1960s, actually)…. What Gary and I (among others) doubt is that deep learning is 100% or 80% or 50% of the story… my guess is that it's more like 15% of the story…

Go is a very tough game, but in the end a strictly delimited domain. Handling the everyday human world, which is massively more complex than Go in so many ways, will require a much more sophisticated hybrid architecture. In OpenCog we have such an architecture. How much progress DeepMind is making toward such an architecture I don't pretend to know, and their Go-playing software, good as it is at what it does, doesn't really give any hints in this regard.

10 comments

From reading this article, I can see that DeepMind is very much still being used. I want to know if AGI has been figured out, or at least better understood, through "other means"; I'm sure you know what I'm talking about. If not, that is very unfortunate. I guess part of the problem is understanding what triggers emotions, which in turn, down the chain, will influence behavior. I'm sure that one has been a real ball of wires to figure out. I think that IBM Watson has been a useful platform. I dunno. Others would know more. Anyway, you could write back on FB.

Totally agree, Ben: it's not the individual techniques (reinforcement learning, supervised deep learning, Monte Carlo tree search, etc.) that make it so successful, it's the way these techniques have been combined into a single integrated system!

I'm going to repeat what I posted on 'Overcoming Bias' about this, because the system here is basically a 3-level architecture (a hierarchical system that operates at 3 levels of abstraction).

"The recent breakthrough of Google's Go-playing AI, which has beaten the European Go champion, is further evidence that AGI is coming soon.

Although the machine relies on some Go-specific tricks (for planning), there are also general-purpose techniques being deployed that are relevant to AGI.

First, the architecture consisted of a '3-level split': inference at 3 different levels of abstraction. This confirms to me that just 3 levels of abstraction in inference are all that's needed for AGI!

3rd level: Planning (long-term strategy). Used a Go-specific trick here (Monte Carlo Tree Search) to select strategy. Not general-purpose, so it's the weak link in the architecture (but still very effective for Go playing).

See the 3 levels? It confirms that, in general terms, all AGIs will operate with a hierarchical architecture that uses these 3 levels:

Mathematics can be considered the ‘heuristic search’ mechanism of reality; it is the means through which reality explores all branches of ‘possible worlds’.

Physics can be considered a ‘policy network’ which decides what possible worlds should become ‘real’ (i.e have their pathways explored in depth)

Finally, mind can be considered a 'value network'; it is the 'evaluation function' of reality, which prunes the tree of possible worlds down to the more limited set of 'real worlds' and 'terminates' the search function.

A direct match to the 3 optimal types of inference for each of the 3 functions!
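As an aside on the planning level discussed above: Monte Carlo Tree Search itself is actually quite general-purpose and fits in a few dozen lines. Here is a toy, single-player sketch (the game, horizon, and parameters are invented for illustration; AlphaGo's real MCTS is far more elaborate and is guided by its policy and value networks):

```python
import math
import random

MOVES = [+1, -1]   # toy game: pick +1 or -1 each turn
HORIZON = 4        # game ends after 4 moves; reward 1 if the sum is positive

class Node:
    def __init__(self, state):
        self.state = state      # list of moves so far
        self.children = {}      # move -> Node
        self.visits = 0
        self.wins = 0.0

def rollout(state):
    """Random playout to the end of the toy game."""
    state = list(state)
    while len(state) < HORIZON:
        state.append(random.choice(MOVES))
    return 1.0 if sum(state) > 0 else 0.0

def uct(parent, child, c=1.4):
    """Upper Confidence bound for Trees: exploit + explore terms."""
    if child.visits == 0:
        return float('inf')
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(root, iterations=2000):
    for _ in range(iterations):
        node, path = root, [root]
        # Selection/expansion: descend by UCT, creating children as needed.
        while len(node.state) < HORIZON:
            for m in MOVES:
                if m not in node.children:
                    node.children[m] = Node(node.state + [m])
            parent = node
            node = max(node.children.values(), key=lambda ch: uct(parent, ch))
            path.append(node)
        reward = rollout(node.state)
        for n in path:          # backpropagation
            n.visits += 1
            n.wins += reward
    # Best first move = most-visited child of the root.
    return max(root.children, key=lambda m: root.children[m].visits)

random.seed(0)
best_move = mcts(Node([]))
```

Even in this stripped-down form the four MCTS phases (selection, expansion, simulation, backpropagation) are visible, which is why the technique transfers well beyond Go.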

Aleksandar Dimov says:

I agree with you, but I don't think their way of approaching the field is ineffective; exactly the opposite.
To me it is obvious that the DeepMind team is testing parts of a bigger system.
This can be seen from the progress they made from Atari games last year to the current AlphaGo. Those are examples of testing components. Anyway, the work and the progress in one year are very impressive, and important for attracting more capital. It is very likely that they will tackle some commercial problem in the next year or two.

Sean O'Connor says:

I've been thinking a bit more about the applications of Google DeepMind's Go algorithm. I think it fits in very well with computational biochemistry for drug design, as well as engineering areas where simulators are used, such as analog electronics.
I did an evolutionary-algorithm add-on for an electronics simulator that could modify component values to optimize a given circuit. It was impossible to get anyone in that community to take it seriously, though. A pity, because it actually worked well; with a little support it could have been taken further.
With the deep learning algorithm, I would say you could create a system that designs circuits from scratch, rather than just adjusting component values in a given circuit. And almost certainly the designs would be better than anything a human could achieve.
I would say the only prerequisite is that a simulator is available for the physical system you want to design.
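The component-tuning idea mentioned above can be sketched very compactly. In the following toy, a closed-form voltage divider stands in for the circuit simulator (the real add-on would call a full simulator instead), and the target voltage, population size, and mutation scale are all invented for illustration:

```python
import random

def simulate(r1, r2, vin=9.0):
    """Stand-in for a circuit simulator: output of a resistive
    voltage divider. A real add-on would run the full simulation."""
    return vin * r2 / (r1 + r2)

TARGET = 3.0  # desired output voltage (hypothetical design spec)

def fitness(ind):
    # Closer to the target output voltage is better.
    return -abs(simulate(*ind) - TARGET)

def mutate(ind, scale=0.1):
    # Perturb each resistance multiplicatively; keep values positive.
    return tuple(max(1.0, r * (1 + random.gauss(0, scale))) for r in ind)

def evolve(pop_size=30, generations=200):
    random.seed(1)
    pop = [(random.uniform(1e3, 1e4), random.uniform(1e3, 1e4))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]   # truncation selection (elitist)
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

r1, r2 = evolve()  # evolved component values, with r1 close to 2 * r2
```

Because the only thing the loop needs is a fitness score per candidate, the same skeleton works with any simulator, which is exactly the prerequisite noted above.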