Deep learning, best understood as the re-emergence of artificial neural networks, has recently succeeded as a major approach to artificial intelligence. In many fields, including computational linguistics, it has largely displaced earlier machine learning approaches thanks to its superior performance.

In this video lecture, Christopher Manning, Thomas M. Siebel Professor in Machine Learning and Professor of Linguistics and Computer Science, offers the audience some insight into the recent successes and future prospects of deep learning, the re-emergence of neural network methods, for language understanding and cognition.

The lecture opens: "In the last few years, machine learning in general, and deep learning systems in particular, have achieved enormous success in sensory perception tasks. We have taught our computers to look at an image and recognize the objects it contains. We have also taught our machines to hear our voices and recognize the words we say. That is why speech recognition on our cell phones is now remarkably good at hearing and recognizing what has been said.

However, recognizing words is not the same as understanding them. Today our cell phones readily recognize the words we speak, but often the best they can offer in response is: 'Would you like me to do a web search for those words?' Moreover, intelligence is not only about perception; it is also about understanding and reasoning over human knowledge. Most of humankind's knowledge is available in written form, in books and, increasingly, in digital form on the internet.

So a question that lies at the center of artificial intelligence is this: how can we teach computers to comprehend human language, so that they can not only interact with us more effectively, but also access the knowledge that humans have accumulated, and interpret and reason over it intelligently as we can? Attempts to understand human language have a very long history. Many centuries ago, Aristotle did some of the foundational work in this direction, seeking to systematize things such as the parts of speech of language, nouns and verbs, and he also worked on inference, for which he is most famous through his notion of syllogisms. It then took a very long time for the understanding of language to move much beyond this point.

The central figure who brought things forward in the middle of the last century was Noam Chomsky, who realized that there was much more to understand about the structure of human languages, how they are learned, and how they convey meaning, than what was known in traditional philology or linguistics. It was Chomsky who redirected linguistics toward being a science in the 1950s and 1960s.

He also argued that grammars are well-defined mathematical objects that can be studied. He made huge contributions not only to linguistics but also to computer science: the area of formal language theory, which underlies modern programming languages, largely follows Chomsky's ideas," Manning continues.

From here the lecture transitions: the speaker describes being a direct descendant of Chomsky's tradition and recounts a story that traces the movement over the generations from the humanities toward the sciences and engineering. He also goes on to mention figures such as Herman Brecht, among others.

In this public lecture of almost 55 minutes, well worth listening to, Manning goes on to discuss results in computer vision, speech, and language that support the preceding claims.