Machines are not rising (yet)!

While movies about artificial intelligence are flooding the market, Elon Musk and scientists like Stephen Hawking and Stuart Russell are growing increasingly worried that modern technology is slowly but surely getting the upper hand. Even Russell, co-author of the landmark textbook “Artificial Intelligence: A Modern Approach”, asked researchers in his open letter to work on systems that are robust and beneficial. Is artificial intelligence a real threat?

What is it that we are so afraid of?

People who are scared of artificial intelligence are basically worried about two things:

in the unpredictable swirl of algorithms, it is somehow always the human who ends up being the weakest link in the process, so the machine coolly decides to eliminate them;

machines may become conscious and turn against their makers.

Warning us (or rather his fellow researchers), Russell finds the first threat feasible. The good news is that with proper care this problem can be managed. But why does it require attention? Most systems labelled artificial intelligence belong to the realm of machine learning. Essentially they do not imitate the human way of problem solving; rather, they carry out tasks we cannot specify and program explicitly. Weather forecasting may be the most widely known example, but there are others too, such as nowcasting or the classification methods applied in medical diagnosis. Procedures like these may save lives or even determine the fate of entire communities, as with “predictive policing”, a practice becoming more and more widespread nowadays. Luckily, however, research in this field has to meet strict requirements. Code can be monitored with the QA methods used in software development, and statistics provides us with ways of evaluating the results. Therefore, we can say with confidence that by “keeping an ear to the ground” the first threat can be reduced, even if not entirely eliminated.
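The statistical evaluation mentioned above can be as simple as computing a confusion matrix and precision/recall on held-out data. Here is a minimal sketch in plain Python; the “high-risk”/“low-risk” labels and the sample data are invented for illustration, not taken from any real predictive-policing system:

```python
from collections import Counter

def confusion_counts(y_true, y_pred, positive="high-risk"):
    """Count true/false positives and negatives for one target class."""
    counts = Counter()
    for truth, guess in zip(y_true, y_pred):
        if guess == positive:
            counts["tp" if truth == positive else "fp"] += 1
        else:
            counts["fn" if truth == positive else "tn"] += 1
    return counts

def precision(c):
    # Of everything flagged positive, how much really was positive?
    return c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0

def recall(c):
    # Of everything really positive, how much did we flag?
    return c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0

# Hypothetical hold-out labels versus a classifier's predictions.
y_true = ["high-risk", "low-risk", "high-risk", "low-risk", "low-risk"]
y_pred = ["high-risk", "high-risk", "low-risk", "low-risk", "low-risk"]

c = confusion_counts(y_true, y_pred)
print(precision(c), recall(c))  # 1 TP, 1 FP, 1 FN: both metrics are 0.5
```

Numbers like these, tracked on data the system never trained on, are exactly the kind of monitoring that keeps such systems accountable.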

How about the second one? It assumes a general, not task-specific, machine that is able to set its own goals. Last year Google DeepMind's system learnt to play Atari games fairly well; it then beat the European go champion Fan Hui and, this year, Lee Sedol, one of the world's best players. Now the same basic techniques that mastered various games are being used to analyze patients' records at Moorfields Eye Hospital.

One may wonder… In the famous film Blade Runner, genetically engineered replicants (Nexus-6 models) try to escape their built-in four-year lifespan. While confronting their maker they create beautiful poetic images, like the “Tears in Rain” monologue… Is the arrival of such beings imminent?

An issue unresolved for 2000 years

It is not enough to understand reality in order to experience it; the understanding itself must first be illuminated. Understanding existence moves, so to speak, within an already shining horizon. (Heidegger: Basic Problems of Metaphysics, p. 351)

Most AI books include a passage on the limits of artificial intelligence somewhere in the introduction. Interestingly, it is Hubert Dreyfus, an expert in the continental, phenomenological and hermeneutical tradition (an entirely different field), who gets quoted in these works, rather than the aces of classical analytical philosophy. The reason is that his study “Alchemy and Artificial Intelligence”, published in 1965, and his books “What Computers Can't Do” and “What Computers Still Can't Do” have withstood the test of time and brilliantly forecast the limits and pitfalls of artificial intelligence research.

Studying traditional artificial intelligence, Dreyfus found it to be based on four assumptions:

the biological assumption: at some level the brain processes information in discrete operations, by way of some biological equivalent of on/off switches.

the psychological assumption: the mind can be viewed as a device operating on bits of information according to formal rules, which can be executed on a discrete information-processing unit.

the epistemological assumption: knowledge can be formalized, i.e. everything that can be understood by human beings can be expressed in context-independent formal rules or definitions.

the ontological assumption: the world itself consists of independent facts that can be represented by independent symbols.

This is exactly the programme that western philosophy and science embarked on two thousand years ago. Traditional AI (or GOFAI, “good old-fashioned AI”) did believe that making artificial intelligence would help us understand natural human intelligence, and this is exactly the idea behind the psychological assumption. Being independent of the other three, which together make up modern artificial intelligence, it was, however, quickly discarded and relegated to cognitive science.

Dreyfus acknowledges both the benefits and the constant development of artificial intelligence. He points out, however, that it took western intellectuals a 2000-year-long struggle to realize that the problem of brain and mind cannot be resolved within this old framework, and AI rests on the very same premises.

Dreyfus claims that only a certain part of human intelligence is built according to methods readily treated by science: basic problem-solving principles and pattern-based operations that can be expressed by rules or assumptions. Human experience, however, is also crucial and cannot be ignored; we are products of our environment at least as much as we are its observers and creators. Like Quine, Dreyfus is a holist: in order to recognize the basic elements of either the world or our knowledge, a prior comprehensive image of the world itself is needed. Let us take an example:

As the famous gavagai example goes: suppose we find ourselves with an isolated tribe and want to write down their language. What we would do is conduct observations, collect linguistic data and try to reconstruct the rules of the language from the behavior and reactions of the speakers. If a member of the tribe we are following suddenly sees a rabbit and cries out “gavagai”, we take notes and try to analyze this behavior. How to translate it into English is another matter.

It may mean “rabbit”, or “there's a rabbit”, or “it's a rabbit over there”, but it may also be “there goes today's dinner”. With some practical tools we can certainly narrow the possible interpretations down. For instance, if the same word “gavagai” is uttered in the evening, when we have a piece of meat on our dinner plate, the options are restricted, but the translation can still be either “dinner” or “rabbit”. Quine says this happens because, in order to make correct interpretations, we would have to know the entire language “all together”. We don't simply learn isolated sentences but their coherence and the related empirical experience as well; the sentences of a language are therefore mere abstractions. Their meaning comes from the language as a whole rather than from the individual sentences constructing it.
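Quine's point can be phrased as a toy elimination game: each observation rules out the candidate translations that do not fit its context, yet candidates that always co-occur can never be separated. A small sketch, with entirely invented observation data:

```python
# Each observation records which candidate translations were
# consistent with the context in which "gavagai" was uttered.
observations = [
    {"rabbit", "there's a rabbit", "today's dinner"},  # a rabbit runs past
    {"rabbit", "today's dinner"},                      # rabbit meat at dinner
]

candidates = {"rabbit", "there's a rabbit", "today's dinner", "tree"}
for context in observations:
    candidates &= context  # keep only translations consistent with this context

print(sorted(candidates))  # two candidates remain: still underdetermined
```

However many such observations we add, as long as rabbits and dinners go together, the intersection never shrinks to a single meaning; that is the underdetermination Quine describes.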

All this is true of intelligence as well. It must be remembered that we humans are embedded in the world surrounding us. The world, as it is given to us, is its own best representation, and we use it unconsciously on a daily basis. We extend our minds when using a given object, for instance a church tower, as a signpost for direction. Our mind, however, is very much different from our brain. Our perception, just like our presence in the world, is defined by our physical body, since we experience the world through our senses and change it with our body. This thought of Dreyfus is also a precursor of embodied cognition.

Robots!

Before anyone thinks this is only philosophy, let's look at the Moravec paradox. Moravec and Brooks, pioneers of modern robotics, became interested in embodied cognition partly as a result of Dreyfus's influence. They attempted to break down the boundaries of the traditional approach by providing their intelligent systems with a body. In the course of this work they discovered the following paradox: high-level, symbolic processing requires very little computation, while low-level, sensorimotor processing requires enormous amounts of computational resources. What is more, symbolic processing is built on those lower levels.

Let us suppose we are able to make a robot that can carry out embodied cognitive processes, and let us assume it has consciousness, whatever that may be. This means that its construction must be very similar to a human's; so similar, perhaps, that the Voight-Kampff test conducted in Blade Runner would be needed to decide whether we are interacting with a human being or an android.

Significant steps have been taken towards more profound discoveries in the field of artificial intelligence; Google DeepMind's project is now learning general concepts. Dreyfus warns us that all this covers only a small part of the actual operations of the mind. To recognize unique elements, a comprehensive approach is needed, since concepts are acquired together with their relations to each other. These relations are perceived when embodied in a world; without a body and a surrounding environment, only partial success is attainable.

Visualizing Star Wars Movie Scripts

A long time ago, in a galaxy far, far away, data analysts were talking about the upcoming new Star Wars movie. One of them had never seen any episode of the two trilogies before, so they decided to make the movie more accessible to this poor fellow.

Hungarian sentiment dictionaries

You can find our Hungarian sentiment dictionaries on opendata.hu. If you'd like to know more about these resources, check out our blog post.

Hungarian Teachers’ Protests: What happened on Facebook?

At the beginning of 2016, Hungarian teachers, students and supporters organized a mass movement to express their strong disagreement with the government's education policy. Notable demonstrations took place in Budapest and other big cities, one on 13 February and another on 15 March. During these months many posts and comments were published and many opinions were expressed on Facebook. With the help of our Facebook scraper we collected the posts, comments, likes, reactions and active persons of the two events. We then created a Shiny dashboard on top of this data, aiming to understand the community's Facebook activity and discourse through visualizations.

Anti-roma topics on kuruc.info

From the mid-2000s the number of anti-Roma and racist utterances has been increasing in Hungary, and this manner of speech has become accepted in common discourse. Our research focused on the anti-Roma topics in the far-right media over this period. Here you can explore the distribution of various topics over time, along with the most important events that influenced each of them.

Facebook scraper

You can find our little Python-based Facebook scraper on GitHub.
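As a rough sketch of what such a scraper does (the API version, endpoint and field names below are illustrative assumptions, not the actual repository code): it requests a page's feed from the Facebook Graph API with an access token and follows the paging links to collect posts.

```python
import json
import urllib.request

def feed_url(page_id, token, api_version="v2.6", limit=100):
    """Build a hypothetical Graph API URL for a page's posts feed."""
    return (f"https://graph.facebook.com/{api_version}/{page_id}/posts"
            f"?fields=message,created_time&limit={limit}"
            f"&access_token={token}")

def fetch_posts(page_id, token, max_pages=10):
    """Follow 'paging.next' links until exhausted (needs network access)."""
    url, posts = feed_url(page_id, token), []
    for _ in range(max_pages):
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)
        posts.extend(payload.get("data", []))       # collect this page of posts
        url = payload.get("paging", {}).get("next") # URL of the next page, if any
        if not url:
            break
    return posts
```

The real scraper on GitHub handles comments, likes and reactions as well; see the repository for details.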


Precognox

This site is dedicated to our R&D activities; to see what we are doing for profit, check out our customers' page.