Finally I started to translate my old articles, including my teenage Theory of Mind and Universe - all milestones of my AGI research. I wrote this particular one as a 19-year-old freshman in Computer Science at Plovdiv University. I was kind of too serious back then; this is my favourite picture from that time... :-P

Tosh in 2004

Natural Language Processing, NLP.

An example of a search through the linguistic knowledge base of the author.

Many different meanings of "meaning" defined for different uses.

Quasi-formal semantic analysis - from words to chunks (expressions) to clauses to complex sentences. A search for relations, links.

Criticism of the usage of short, ambiguous sentences as a way to argue the "impossibility" of creating an AI and machine translation; the lacking context is filled by human imagination and could be filled by a machine's imagination. A discussion of the artificial pruning of the set of interpretations that humans perform when translating or interpreting, and the implicit refusal to expect unknown meanings.

A thought experiment of how a 3-year-old toddler interprets an unknown sentence ("time flies"), how he searches for a meaning and maps meanings onto his senses. Told as a real story: what he knows, what he experiences and what he would experience if...

The style is not really academic; there are some dialogues and discussions with a virtual opponent, where this is appropriate to display human biases.

Others...

Continues with long comments with additions about reinforcement learning and other topics (to be translated and linked)

„What is possible to be done with” is yet another meaning of the concept of “meaning”.

Happy reading and enjoy the story of the little Johny and the flies that are flying around the watch... ;)

BEGIN...

This article starts with a definition of "meaning" by a friend of mine, Ilian Georgiev, with whom I shared thoughts a year ago.

Meaning/Sense (Ilian Georgiev's definition):

The meaning of a sentence (a thought) is a function that searches for a contradiction between the knowledge base of the evaluator (the one who thinks) and the thought being analyzed. All elements from the knowledge base and their connections are juxtaposed with the new thought, the one being analyzed. If any element of the thought has a connection to another element, and that connection cannot be found in the knowledge base, then the sentence is classified as nonsense (has an error).
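A minimal sketch of this definition, assuming a toy knowledge base of (subject, verb, object) links; all names and data here are hypothetical illustrations, not the author's implementation:

```python
# Known links: each tuple is a connection the evaluator has encountered.
knowledge_base = {
    ("cat", "drink", "milk"),
    ("cat", "drink", "water"),
    ("man", "throw", "stone"),
}

def has_meaning(subject, verb, obj):
    """A thought 'makes sense' if every connection it asserts
    can be found in the evaluator's knowledge base."""
    return (subject, verb, obj) in knowledge_base

print(has_meaning("cat", "drink", "milk"))   # True
print(has_meaning("cat", "drink", "stone"))  # False -> classified as nonsense
```

The rest of the article can be read as relaxing exactly this last check: instead of rejecting the missing link, the searcher tries to construct it.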

Using this definition, let's search for the meaning of a weird sentence that I made up, I don't know how. I'll also use a made-up semi-formal syntax of NLP analysis, which you'll grasp on the fly.

(The original paper is in Bulgarian, and the author is not a native English speaker; it is possible that there are some mistakes in some of the senses used.)

Let's analyze all the clauses; this will give clues for the meaning of earlier clauses and of the whole sentence.

„The cat drank the stone”

Is "the cat" linked to the verb “to drink”? - YES.

What are the links?

- Usually “to drink” is linked with objects which are linked to liquidity or semi-liquidity of a substance. In general, in the definition of an object that is linked to the verb “to drink”, there is usually a morpheme or semantics of liquidity.

Examples are checked more easily if they are written down in a unified way: Verb + Object/Noun, for quick comparisons. I don't know whether to check the general concepts/meanings first (like liquidity), or the specific cases first (collocations, like “drink vodka”).

„Drink” is used in some other cases, where the object is not a liquid:

1. Drink some poison.
2. Take a pill. (Take == Drink)
3. Take a medicine. (Take == Drink)

In these cases the object is:

1. On the surface of a liquid.
2. Floating in a liquid.
3. Dropped in a liquid.
4. Sunk in a liquid.
5. Absorbed in a liquid.

The most common meaning of “to drink” is linked with an AGENT which is a living being. Living beings have a throat, through which the drunk object passes. The liquid assists the object in passing through the throat, when the object is not a liquid itself. I recall an idiom (Bulgarian, in literal translation): "A duck has drunk his sense."

So, what about the linkage between “to drink” and the object “stone”?

Stone has direct links to verbs such as "to throw", “to crack” and similar, “to kick” and others. Direct links are examples of usage which I have encountered in texts or speech.

„Stone” can play different roles: AGENT (subject) or an object, that clarifies/specifies an ACTION, done by another AGENT.

Is it possible to drink a stone?

One can also assume that, if such a thing is said, then this is a special kind of stone. If a meaning must be found at any price, it is possible for the searcher to invent a meaning that matches the given sentence. It is also possible to add additional sense, using experience, so that the sentence which was "meaningless" up to now gets its explanation. This point will be discussed again later.

However, this doesn't make the sentence meaningless yet, because it was found above that "drink" can be linked to objects which are not liquid, and in this sense "to drink something" is a reference to "to swallow something" (to pass it through the throat).

"Jump” is an action where the AGENT reaches a state in which its body doesn't touch the ground.

It is the same for “Fly”. Therefore, “to jump” partially covers the meaning of “to fly”. Besides, I know examples of “flying” where "to fly" is used with the sense of “to jump” - directly, or implicitly suggested by the context.

“Air Jordan” (in Bulgarian - "Въздушният Майкъл Джордан", "The Airborne Michael Jordan")

The basketball player Michael Jordan jumps and stays in the air long enough to impress people more than typical jumpers; this has caused his jumps to be linked with the morpheme “air”, which is used in words for flying (airplane, airforce).

Therefore “(the cat) flew below the uphill” can be interpreted as:

(the cat) jumped below the uphill.

Now, is it possible for “jump” to be used with “below”? Can you jump below?

To..

- Jump over s.t.
- Jump into
- Jump out

...

There is no “jump below”, but that doesn't mean the expression is meaningless. (Actually there is "jump below something", but, say, not "jump below the uphill".)

„Jump” has other meanings, like: doing something faster than usual, or moving fast.

Uphill is an object; it can be located in a mountain, but we can imagine it to be any other object over which somebody can move “up”, walking on it. This “uphill” object can be made of wood or metal and can have a hollow inside, where a cat can hide.

The cat jumped under the thing that had an uphill on top of it...

The whole sentence can turn into:

"Котката изпи камъка и литна под нанагорнището." ("The cat drank the stone and flew below the uphill.")

"The uphill had a hollow inside. The cat swallowed the stone and jumped below it."

The additional clause fills up the uncertainty in the scenario.

Another interpretation, based on “to fall”, could turn:

"...flew below the uphill" into “...fell below the uphill".

Also, “below the uphill” may mean below the part of the uphill that is steep, i.e. just before the uphill starts to climb.

Then: "The cat swallowed the stone and fell below the uphill."

This gives rise to another interpretation – stone is often linked with “heavy”. There is a proverb (Bulgarian): “Hang a stone on my neck”.

Then: "The cat was climbing the uphill, but it swallowed the heavy stone, which dragged it down below the uphill...”

Or:

While the cat was climbing the uphill, weird little balls fell down from the sky. They looked like meat balls and smelled the same way. The poor cat was tired from hunger and the hard walk, and she bit one of the sky meat balls. A moment later she was frightened – the meat ball turned out to be as heavy as a stone. The cat rolled down until she stopped on the flat land under the uphill.

When searching for a meaning in very short pieces of information, such as single sentences, it is expected that the mind invents, imagines, in order to fill up what is unknown with probable sets of circumstances. If the source doesn't deny them, we can invent any plausible imagined circumstances.

Short sentences as a way to deny creation of AI

Actually, one very rarely meets single sentences in reality, out of a context to constrain and direct the translation or interpretation of the meaning. However, short “nonsenses” – sentences which one cannot interpret unambiguously – are often used to disprove the possibility of creating an AI. Let's check out a classic from NLP.

Time flies.

"It's so hard to translate to another language!”

How would we translate it in Bulgarian? "Времето лети" (Vremeto leti -The time is flying) or „Времеви мухи" (Vremevi muhi - Flies which are related to time)

Or another way: "time" doesn't mean only “time”, and “flies” doesn't mean only the little flying bug. These are just the first two items that came to my mind! The search had obviously been pruned to two items, two possible interpretations.

This kind of unconscious pruning, limitation of the number of variants, will be discussed below.
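The contrast between this unconscious pruning and an exhaustive search can be sketched by enumerating every combination of candidate senses. The sense inventories below are hypothetical, much smaller than a real lexicon:

```python
from itertools import product

# Hypothetical sense inventories for each word.
senses = {
    "time": ["the abstract time", "to time/measure", "time-related (adj.)"],
    "flies": ["moves through the air", "the insects", "flees quickly"],
}

# Without premature pruning, every combination is a candidate reading.
interpretations = list(product(senses["time"], senses["flies"]))
for t, f in interpretations:
    print(f"time = {t!r}, flies = {f!r}")

print(len(interpretations))  # 9 candidate readings, not just 2
```

Most of the nine combinations are later discarded by context or experience, but the point is that they exist before pruning, and an unknown idiom may hide in any of them.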

Virtual Colleague: “The time is flying” is the correct translation. There's no such expression as “flies, related to time”, nor any other.

Author: Why do you think so?

V. Colleague: The other translations don't make any sense. I myself would translate it that way, and I think I'm good enough at English. Check out my personal web page.

Author: And why would you translate it that way?

Colleague: Because... I haven't heard of an expression meaning “flies, related to time”...

Author: Therefore you have excluded the possibility of hearing a new sentence, where this expression is used in a meaning that was unknown to you before?

Colleague: Well, I think so...

Author: Who told you that sentence?

Colleague: I'm not quite sure... You? But... Well... It is possible that this is an idiom. Can you explain it to me? Maybe this is a special kind of flies? Or more likely... (What kind of SF fan am I, not to guess this one!) Flies through time! Time travel!

The ambiguity is caused by the lack of a criterion for pruning. Until the moment when an action is executed – an action caused by the input data, which are said to be ambiguous – the ambiguity is not an issue. The system can remember the whole sentence, word-by-word and until the moment of action, a decisive single action, it is known that all interpretations are possible.

And when the action should be done – e.g. a robot has to grab the right cube or the middle cylinder – then the system should use an additional feature in order to disambiguate, to choose. However, since the input is not decisive but ambiguous, turning any of the interpretations into action is not a “mistake”, regarding the input.
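This deferred-choice idea can be sketched as keeping the full set of interpretations alive and selecting only when an action is forced. The interpretation records and the selection criterion below are invented for illustration:

```python
# All interpretations of the ambiguous command coexist without error.
interpretations = [
    {"object": "right cube"},
    {"object": "middle cylinder"},
]

def act(interpretations, extra_feature):
    """Disambiguate only at the moment a single decisive action
    is required; 'extra_feature' is whatever additional criterion
    (context, preference) the system uses to choose."""
    chosen = extra_feature(interpretations)
    return f"grab the {chosen['object']}"

# Any choice is 'correct' with respect to the ambiguous input.
prefer_cylinder = lambda xs: next(
    (i for i in xs if "cylinder" in i["object"]), xs[0])
first_mentioned = lambda xs: xs[0]

print(act(interpretations, prefer_cylinder))   # grab the middle cylinder
print(act(interpretations, first_mentioned))   # grab the right cube
```

Note that the "mistake", if any, lives in the extra criterion, not in the ambiguous input itself, which matches the argument above.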

Perhaps the Natural Language, or as the author calls it – The Language of Mind – allows ambiguity, because there are many “correct” possibilities. There are many cases, where each of the possible solutions/interpretations is “right” in the sense that the device (human) who took the decision continues to function after executing in effect, for real, an action, caused by the given interpretation.

So, if a given system continues to function – according to a given definition of “functions”, e.g. its heart continues to work for at least so-and-so long a period after executing the given action – then this action was “right”, i.e. the action is assumed to have followed laws that don't lead to malfunction.

More complex control units have a larger space of correct decisions; they have wider “freedom”, i.e. possibilities for future actions after which they will continue to function correctly.

One can always find meaning/sense, i.e. a connection between items. The meaning is the connection between things. (The relations between things)

The easiest thing to do is to redraw an already known, drawn line, and this is what is done initially when one performs a semantic check – whether precomputed links/connections/relations with the given expression already exist. If such links exist, they are seen as “gray lines”, which the mind can darken – as one does when given a piece of paper with gray lines and told to draw lines without much thinking or planning. This is what the virtual colleague did above: he rejected the possibility that “time flies” has meanings that are yet unknown to him and need to be computed, “drawn” in his memory.

Let's overview a case with the same sample expression in another case.

Time Flies...

A three-year-old little native English speaker – Johny. He knows that “a fly” means the flying bug (something little and black that flies, and when it lands on your face it tickles, and you try to make it go away by waving your hands).

Johny knows how to form the plural of fly – flies, but he doesn't know that “a fly” can also mean “a flight of an airplane”. For Johny, “time” means just “a watch”. Johny knows that “a clock” and “a watch” have similar meanings – something circular, with long things that rotate... and the longer things rotate faster than the shorter and thicker ones; the thicker ones sometimes appear not to move at all, but after you have played for a while with your toy cars and look at them again – they seem to be at another place...

All the times when Johny has heard talks about time, he has seen clocks or watches.

Johny has heard his father saying “I don't have time, we have to hurry up!”, and when his father said that, he looked at his watch.

That way, the concept of “time” is linked to the image of a “watch”: when hearing “time”, he sees a watch, not abstract concepts. Johny himself doesn't have a watch.

Now let's assume that we put a big, shiny, colourful watch on our wrist and go to play with Johny on the playground. What is he going to do if we tell him “Time flies!” and he hears this for the first time in his life?

Colleague: Perhaps he will look at our shiny watch and will search for flies around it...

Author: Exactly! Can you imagine what he would do if we didn't have a watch on our wrist?

Colleague:Maybe he would look to our hand, searching for a watch and flies... If he has remembered the pattern of watches being on the left wrist, he may first check there, or he may check both...

The images Johny has for “time” and “flies” are recalled, and Johny searches for expressions of these images in the environment accessible to his senses. The specific medium is not important.

The machine needs an external environment, where to search for meaning and senses – MATCHES of images, names, features, coincidences, patterns.
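This search for matches can be sketched as intersecting the recalled images of each word with the features currently perceived; the concept images and the scene contents below are invented for illustration:

```python
# Recalled images for each word the child (or machine) knows.
concept_images = {
    "time": {"watch", "clock"},
    "flies": {"small black flying bug"},
}

def ground(sentence_words, perceived_features):
    """Return, per word, which of its recalled images are present
    in the environment accessible to the senses."""
    return {
        w: concept_images.get(w, set()) & perceived_features
        for w in sentence_words
    }

scene = {"watch", "sand", "swing"}   # the playground, with our shiny watch
print(ground(["time", "flies"], scene))
# 'time' matches the watch; 'flies' matches nothing -> keep searching or ask
```

An empty match set is exactly the situation discussed next: the child either waits for a richer context, discards the sentence, or asks a teacher.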

Author: What is Johny going to do after realizing that there are no flies around the watch, or that there isn't even a watch?

Colleague: It depends on what behavioral models have been developed so far. He could remember the expression "time flies" as an image that represents what he thought then – “flies which are flying around a watch” – and not do anything further; or Johny could wait, expecting to face a usage of this expression in an environment which is richer in details and features, so that he would be able to extract or confirm the meaning.

Details, specific cases are what limits the space of search, the domain. Details are forces for pruning...

Author: Johny may also not make any conclusion, but just take the expression as nonsense, so he wouldn't remember it. Also, he can ask us immediately:

- What does “time flies” mean?

The machine should also be able to do the same, as we do; it will need teachers and supervisors while it develops.

Our explanations, and the degree of trust he has in what we explain to him, will determine how the child is going to limit the space of search, but also how he will expand the space, by adding possibilities which he didn't think of before.

If one explains to Johny, that “time flies” means “time is never enough”, the child may remember this explanation as a whole sentence, without interpretation. Just a reference, a link: “time flies” redirects to “time is never enough” and then he would search for a meaning for the new sentence.

On the other hand, one can also explain to Johny that “flies” also means “to fly”, to move like birds or like Superman or so (recall that he didn't know the verb; it's strange not to know it, but that's the assumption), without explaining to him the abstract concept of time.

In this case, Johny could keep linking “time” to “watch” and may start to imagine “time flies” as “the watch flies”. He may look around, searching for a watch that is flying – generally, this is a search for features, input data/senses which could confirm the link that was made. Or... any time he hears “time flies”, Johny would imagine a flying watch and would ask himself: “Do the flying watches have wings, or do they fly magically?”

Imagination is the point here. When searching for a meaning, we should be able to imagine, to fantasize. That means one kind of inputs/senses causes other kinds of inputs/senses. The primary may be “real”, taken from raw data from reality, linked to what the machine or human takes for “Reality”; while the secondary input/sense could be imagined, fantasized, unreal. Talking about reality, humans usually take for “real” input channels such as vision, hearing, touch, taste, smell, when receiving data at the maximum possible rate (max resolution, raw data).

Vision is a primary sensory input when we're sensing images, where we can recognize individual pixels. Letters, numbers and any symbols come from a secondary sensory input, because from the primary sensory input, containing raw pixels, each one with an independent value, data with a smaller size (in raw bits) are extracted – letters, digits, geometric shapes etc.

...

In order for a system to find meaning and make sense of things, it is very useful for the system to have at least two different kinds of sensory inputs, each of them able to invoke, to link to, the other one. The images and relations between images (e.g. motion), and the sounds and relations between them, can “generate” words: interpretations which are described with a smaller quantity of bits. When Johny hears “a watch”, he can imagine, somehow see, the image of the watch and what is possible to be done with it.
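The mutual invocation of two input kinds can be sketched as a bidirectional associative store; the stored pair below is a hypothetical illustration:

```python
# Minimal sketch of cross-modal association: each modality can
# invoke the other.
word_to_image = {}
image_to_word = {}

def associate(word, image):
    # Store the link in both directions, so either input can
    # invoke ("imagine") the other kind of input.
    word_to_image[word] = image
    image_to_word[image] = word

associate("watch", "round object with rotating hands")

# Hearing the word invokes the imagined image...
print(word_to_image["watch"])
# ...and seeing the object invokes its compact name (fewer bits).
print(image_to_word["round object with rotating hands"])
```

The asymmetry in size between the two sides – a short word versus a rich image – is the compression mentioned above: words are interpretations described with fewer bits.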

„What is possible to be done with” is yet another meaning of the concept of “meaning”. There is no sense in meaning, if you can't do anything with it.

Actually, everything “makes sense”, or “has a meaning”, in the sense that it causes something to be done. It is so because even the so-called “non-procedural knowledge/data” are only “non-procedural”, non-active, in the sense that they don't cause a type of action that is formally defined as “procedural”. In a computer, an information processing system, any data cause actions. Data define what happens in the machine's “mind”. For example, the descriptive data of a page of text determine what exactly the machine is supposed to do while its CPU reads the data from the memory, processes it, displays it, prints the page.

The meaning is the action that the “thing” being evaluated invokes/causes/triggers, and this meaning can be different, depending on how deeply we search, and more – on how and what.

How and what we search is more dependent on our past experience than on the short input being evaluated.

Finally... Let's assume that the meaning of a message/sentence is an action to be taken, assuming that it was caused by the meaning that was found (if another meaning was found, another action would be taken).

The higher the complexity of the system, the higher the weight of memories/experience in the decision of how and what to search, and therefore – of what could be found. It's a paradox, but:

The more complex the system – the more meaningless the meaning

The smarter the system – the more meaningless the meaning

Because the system – the Artificial Intelligence or the human – can more freely search and find meaning – links between items, things, phenomena, events, messages, memories, objects, images, sounds or whatever.

Continues with long comments after the article, about reinforcement learning and other topics... (to be published, when translated)

Friday, January 1, 2010

Todor is 25, born in the city of Plovdiv, Bulgaria. He holds an MS in Software Engineering and a BS in Computer Science from the University of Plovdiv (best in class); he was an intern at RIILP, Wolverhampton, UK, where he studied Natural Language Processing. Todor started playing with computers as a boy; his first experiments with computer graphics and digital signal processing were as early as the late 1990s on his Pravetz-8M (Apple ][e clone); he developed a communication system for disk transfer between a Pravetz-8M and a PC, based on sound frequency modulation-demodulation. Todor is the author of a speech synthesizer (“Glas”) and a context-sensitive English-Bulgarian dictionary, “Smarty”, and participated in the LREC 2008 and IMCSIT 2008 conferences. He was also a software developer and a verification engineer in a semiconductor start-up.

Todor's biggest scientific thrill, though, is Artificial General Intelligence, and at the moment he's an independent researcher, aiming at founding a private research company. He's also an artist, a writer and an independent filmmaker, and is searching for ways to fund his research by doing show business. At the “Researchers' Night” in Sofia's Technical University, he presented ideas from his Theory of Intelligence, which he created as a teenager.

Todor Arnaudov: I will create a thinking machine that will self-improve

Dreamers and adventurers make the great discoveries. The sceptics' job is to deny their visions, and eventually not to believe their eyes.

- Artificial Intelligence, or AI, is a wide field. Would you explain to the readers, for example, what the difference between “Weak AI” and “Strong AI” is?

AI is a science about systems that solve complex problems which are assumed to require human intelligence. Weak AI solves specific tasks such as image and speech recognition, machine translation, self-driving cars. Strong AI is much more ambitious, and its aim is to answer the general question – What is intelligence? – and how to create universal systems, capable of reaching and surpassing humans in all cognitive aspects. Strong AI is also called Artificial General Intelligence or Universal AI.

- Did a particular event push you to start dealing with the concept of a Thinking Machine?

Yes, the movie “Terminator 2”, when I was 7. The concept of thinking machines excited me. As a teenager I had an inspiration – I wrote some SF and philosophical prose about AI and developed my own general philosophy and theory of the principles of Mind (intelligence) and the Universe. Yes, it was weird... I realized that AI is a universal science and strategically the most important task, because solving it would be an accelerator of any possible research.

- Do you have colleagues in Bulgaria in this field?

Maybe yes, maybe no... Boicho Kokinov and Moris Grinberg are doing Cognitive Science at NBU, Sofia; they work on the cognitive architecture DUAL. A research laboratory called “Sphere” is doing a sort of intelligence research, but what I've read is quite abstract. During my presentation at the Researchers' Night at the Technical University of Sofia, I met Yordan Yankov from the Center for Research of Global Systems; Yordan is working on his theory of intelligence, and he mentioned a special logic system, something related to Quantum Logic and Hegel's dialectic, if I'm not mistaken. Maybe you know about “Kibertron” – an intelligent humanoid robot project. They claim that they have a model of “natural intelligence”, but they require 5 million euros in order to implement it.

- Where should researchers' efforts be focused in order to achieve Artificial General Intelligence (AGI)?

First of all, research should be led by interdisciplinary scientists who see the big picture. You need to have a grasp of Cognitive Science, Neuroscience, Mathematics, Computer Science, Philosophy etc. Also, the creation of an AGI is not just a scientific task; it is an enormous engineering enterprise – from the beginning you should think of the global architecture and of universal low-level methods which would lead to accumulation of intelligence during the operation of the system. Neuroscience gives us some clues; the neocortex is “the star” in this field. For example, it's known that the neurons are arranged in a sort of unified modules – cortical columns. They are built of 6 layers of neurons, and different layers have some specific types of neurons. All the neurons in one column are tightly connected vertically, between layers, and process a piece of sensory information together, as a whole. All types of sensory information – visual, auditory, touch etc. – are processed by the interaction between unified modules, which are often called “the building blocks of intelligence”.

- If you believe that it's possible for us to build an AGI, why haven't we managed to do it yet? What are the obstacles?

I believe that the biggest obstacle today is time. There are different forecasts – 10, 20, 50 years to enhance and specify current theoretical models before they actually run, or before computers get fast and powerful enough. I am an optimist that we can get there in less than 10 years, at least to basic models, and I'm sure that once we understand how to make it, the available computing power will be enough. One of the big obstacles in the past was maybe the research direction – top-down instead of bottom-up – but this was inevitable due to the limited computing power. For example, Natural Language Processing is about language modeling; language is a reduced end result of many different and complex cognitive processes. NLP starts from the reduced end result, and aims to get back to the cognitive processes. However, the text, the output of language, does not contain all the information that the thought which created the text contains.

On the other hand, many Strong AI researchers now share the position that a “Seed AI” should be designed – a system that processes the most basic sensory inputs – vision, audition etc. A Seed AI is supposed to build and rebuild ever more complex internal representations, models of the world (actually, models of its perceptions, feelings and its own desires and needs). Eventually, these models should evolve into models of its own language, or models of humans' natural language. Another shared principle is that intelligence is the ability to predict future perceptions, based on experience (you have probably heard of Bayesian inference and Hidden Markov Models), and that the development of intelligence is the improvement of the scope and precision of its predictions.

Also, in order for the effect of evolution and self-improvement to be created, and to avoid an intractable combinatorial explosion, the predictions should be hierarchical. The predictions at an upper level are based on sequences of predictions (models) from the lower level. A similar structure is seen in living organisms – atoms, molecules, cellular organelles, cells, tissues, organs, systems, organism. Evolution and intelligence test which elements work (predict) correctly. Elements that turn out to work/predict are fixed – they are kept in the genotype/memory – and are then used as building blocks of more complex models at a higher level of the hierarchy.
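A minimal sketch of such hierarchical prediction, assuming simple frequency-based next-element predictors and hand-made chunks (an illustration of the principle, not any specific published model):

```python
from collections import Counter, defaultdict

class Level:
    """Predicts the most frequent successor of an element,
    based on the sequences it was trained on."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, seq):
        for a, b in zip(seq, seq[1:]):
            self.counts[a][b] += 1

    def predict(self, item):
        c = self.counts[item]
        return c.most_common(1)[0][0] if c else None

# Lower level: predicts the next raw symbol.
low = Level()
low.train(list("abcabcabd"))
print(low.predict("a"))  # 'b' - the most frequent successor

# Elements that predict well become building blocks of the next level:
# the higher level predicts over *chunks* of lower-level output.
high = Level()
high.train(["ab", "c", "ab", "c", "ab", "d"])
print(high.predict("ab"))  # 'c'
```

The same `Level` logic works at both levels; only the alphabet changes from raw symbols to chunks, which is the sense in which fixed lower-level models become building blocks of higher-level ones.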

- What exactly is done in the field? Globally, in Bulgaria?

As yet, few researchers and organizations are confident enough to state officially that AGI is their goal, but the number is progressively increasing. Jeff Hawkins is probably the most popular person in the field; he's the author of the famous book “On Intelligence”, explaining his theory of intelligence. Jeff is a founder of a neuroscience institute focused on the neocortex, and his company Numenta is working on a new computer architecture inspired by the neocortex – hierarchical temporal memory, implementing the so-called memory-prediction framework. Another important figure in AGI is Ben Goertzel, the author of numerous books about intelligence. Ben is trying to build an AGI in his company Novamente and plans to use the virtual worlds of massively multiplayer games to teach it. Boris Kazachenko investigates intelligence as a universal algorithm for generalization, and cognition as a part of the meta-evolution of the Universe; he's developing a theory of intelligence. If you want to join the AGI research community, you should also consider the work of Jürgen Schmidhuber, Marcus Hutter, Tomaso Poggio, Hugo de Garis. The Singularity Institute organizes a world conference each year about the so-called “Technological Singularity”, including the advent of universal artificial intelligence and its effect on humanity in the future.

I can't tell what my colleagues in Bulgaria are doing in the field; as for myself, right now I'm warming up – clarifying my own ideas from the past and studying the theories of others. Afterward, I'll continue with improving and specifying my theory of intelligence. I plan to start experiments with a simple seed AI. My ambition is to found a research company, like Hawkins and Goertzel, but I don't have partners and capital yet – I'm searching for them.

- What would these experiments look like?

I will create intelligent agents and will watch their development in virtual worlds. Such an agent would have a “brain”, where I'll implement ideas from my own and others' theories, as well as part of the human brain architecture – cortex and old brain. The cortex has several main types of “zones”, functional units – sensory, motor (linked with “will”) and associative (connections/dependencies between different zones). The old brain is responsible for the emotions and for the feeling of satisfaction/dissatisfaction of the basic instincts and needs. The agent would have sensors and feelings – vision, hearing, touch, pain, hunger, pleasure and others – and a virtual body, which will allow it to interact with the virtual reality, to feed itself, to avoid trouble etc. Just after its “birth”, the agent would be controlled entirely by the old brain and would act mostly chaotically, driven only by the basic instincts, such as pulling out of hot or cold places, or attraction to the smell of food. The cortex will constantly watch and record the agent's sensory inputs and motor commands and will search for patterns that link them. The cortex's goal is to find the patterns of better satisfaction of its basic needs. If the simple experiments are successful, I will make the virtual worlds and the virtual body more dynamic and will fill them with a higher variety of stimuli and patterns. That is supposed to lead to the emergence of more complex behavior. Eventually the virtual world is supposed to turn into real inputs – from cameras, microphones etc.
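A toy sketch of the described setup, with an "old brain" of hard-wired instincts and a "cortex" that merely records experience and searches it for patterns linked to satisfaction; every detail here is an invented simplification, not the planned experiment itself:

```python
import random

random.seed(0)

ACTIONS = ["move_left", "move_right", "stay"]

def old_brain(sensation):
    # Basic instincts only: flee heat, otherwise act chaotically.
    if sensation == "hot":
        return "move_left"
    return random.choice(ACTIONS)

def run(steps=50):
    cortex_log = []  # the cortex records (sensation, action, satisfaction)
    for _ in range(steps):
        sensation = random.choice(["hot", "cold", "food_smell", "neutral"])
        action = old_brain(sensation)
        satisfied = sensation == "food_smell"   # crude reward signal
        cortex_log.append((sensation, action, satisfied))
    # The cortex searches the record for sensations linked to satisfaction.
    return [s for s, a, ok in cortex_log if ok]

print(set(run()))  # the sensations the cortex associates with satisfaction
```

In the real design the cortex would, of course, learn sensorimotor patterns rather than filter a log, but the division of labor – chaotic instinct-driven action plus a recording, pattern-seeking cortex – is the same.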

- Many people believe that an AI should know everything in order to convince them that it's intelligent. However, raw knowledge is not the most important aspect, is it? How do you think the artificial general intelligence machine would look, and also the ultimate AI?

The most important capabilities of an artificial general intelligence machine are self-improvement, learning and universality. A system that interacts with people and its environment and, like a baby, develops from a helpless state to the mental level of, say, a 2-year-old toddler is much closer to my vision of a Thinking Machine than current robots and specialized, narrow AI tools such as speech recognition, image recognition, search engines etc.

The ultimate AGI is capable of self-improving even its most basic algorithms and is ever reorganizing itself in order to work better and better, reaching toward the ultimate limits. In humans there's a similar mechanism, called neuroplasticity, which however declines after the very early years.

- There are a lot of ethical issues around the creation of intelligent machines. Don't you think we would need to separate machines into “good” and “bad” in the future?

I believe the thinking machines would be the most similar creations we have ever met, because intelligence is our most special quality, not our bodies. It is true that machines could do evil things, like in the movies, if they go out of control or fall into the hands of “bad guys”. Unfortunately this is true for all big inventions. Robots would create new and complex cases for the lawyers as well.

- What would you tell all the sceptics who deny that AGI can ever be created?

I wish them good health. Dreamers and adventurers make the great discoveries. The job of the sceptics is to deny, but afterwards not to believe their eyes.

About me

Author of the world's first interdisciplinary university course in AGI, presented in 2010 and 2011. Artificial General Intelligence (AGI) researcher & developer; contributor to the CogAlg project. A renaissance person with diverse fields of interests and activities. Filmmaker (...)
Looking for R&D and creative partners and opportunities! Contact me on twenkid at google com