
Friday, April 10, 2009

Artificial General Intelligence

In a nutshell, artificial general intelligence differs from mainstream artificial intelligence in that people in the field believe that for something to count as intelligent, it must satisfy some specific set of criteria. Unfortunately, not many people agree on what those criteria are.

One thing Dr. Pei Wang made very clear is that, though it is a small field, there are many differing opinions. Specifically, Dr. Wang divides the schools of thought into five categories:

People who believe that something intelligent must be structured like a human being

People who believe that something intelligent must act human

People who believe that something intelligent must be able to solve logical problems

People who believe that something intelligent must have cognitive faculties

People who believe that something intelligent must obey rational norms

Furthermore, even the people who do agree on the definition of intelligence have different ways of going about achieving their goals.

Connecting existing artificial intelligence techniques together

Combining modules based on other techniques into an overall architecture

Extending or augmenting a core technique into a single system

This may seem confusing, and that's because it is. Dr. Wang went on to give examples of each approach, but I will focus on his own research. His project is called NARS, the Non-Axiomatic Reasoning System. The basic premise is that it is a reasoning system with the capability to learn from its mistakes.
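As a rough illustration (my own sketch, not Dr. Wang's actual code), NARS-style systems attach an evidence-based truth value to each belief: a frequency (the fraction of positive evidence) and a confidence that grows toward 1 as total evidence accumulates. New experience then revises a belief by pooling evidence rather than overwriting it. The "evidential horizon" constant k and the revision rule below are assumed from published descriptions of NARS:

```python
# Minimal sketch of NARS-style evidential truth values (illustrative,
# not Dr. Wang's implementation). K is the "evidential horizon" constant.
K = 1.0

class Belief:
    def __init__(self, positive, total):
        self.positive = positive  # w+: amount of positive evidence
        self.total = total        # w : total amount of evidence

    @property
    def frequency(self):
        # f = w+ / w : observed fraction of positive evidence
        return self.positive / self.total

    @property
    def confidence(self):
        # c = w / (w + K) : approaches 1 as evidence accumulates
        return self.total / (self.total + K)

    def revise(self, other):
        # Revision pools evidence from two sources about the same statement,
        # so a belief is amended rather than replaced.
        return Belief(self.positive + other.positive, self.total + other.total)

# "Ravens are black": 4 black ravens out of 4 sightings
b1 = Belief(4, 4)
# Later, conflicting evidence: 1 black raven out of 2 sightings
b2 = Belief(1, 2)
b3 = b1.revise(b2)
print(b3.frequency)   # 5/6 -- belief weakened, not discarded
print(b3.confidence)  # 6/7 -- yet more total evidence, so more confident
```

The point of the sketch is the last two lines: conflicting evidence lowers the frequency of the belief while still raising overall confidence, which is how a system can hold a mistaken belief, meet a counterexample, and adjust rather than crash.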

Dr. Wang jokingly said that he is often proud of his system when it makes a mistake, because then it can learn from it. This is an interesting concept because, as was stated at the lecture, the system's learning often resembles that of a toddler. For example, a toddler might make the incorrect assumption that, since his entire family wears glasses and he does not, he is not part of the family. There is logic behind this assumption, even though it is untrue.

One of the things that makes such learning possible is the implementation of defeasible reasoning. This is where one can say that if x is true, then it is reasonable to presume y is also true, while leaving open the possibility of retracting that conclusion later. The difference from the purely deductive systems used in the past is that a defeasible conclusion may turn out to be false. This makes many more things possible than before, because the computer has the chance to amend its previous assumptions when new evidence arrives.
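A toy sketch of defeasible reasoning in Python (my illustration, not NARS itself): a default rule licenses a conclusion, but learning an exception defeats the rule, so the system withdraws the earlier conclusion instead of deriving a contradiction. The bird/flies example and the rule representation are assumptions for illustration:

```python
# Toy defeasible reasoning: default rules whose conclusions can be
# withdrawn when an exception becomes known (illustrative sketch only).

def conclusions(facts, defaults, exceptions):
    """Apply each default rule unless a known exception blocks it."""
    derived = set(facts)
    for premise, conclusion in defaults:
        if premise in derived and (premise, conclusion) not in exceptions:
            derived.add(conclusion)
    return derived

defaults = [("bird", "flies")]   # "if x is a bird, x presumably flies"
facts = {"bird"}

# Before any exceptions are known, "flies" is concluded.
print(conclusions(facts, defaults, set()))

# New evidence: this bird is a penguin, which defeats the default,
# so "flies" is withdrawn on the next pass.
exceptions = {("bird", "flies")}
print(conclusions(facts, defaults, exceptions))
```

A classical deductive system could never take "flies" back once derived; here the conclusion is provisional by design, which is exactly what lets the system amend its previous assumptions.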