Posted by CmdrTaco on Wednesday March 31, 2010 @11:57AM
from the no-excuse-for-haley-joel-osment dept.

aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"

Since the actual summary seems to involve a fluff-filled sound clip without anything useful, here's the rundown of the article:

1) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge.
2) It turns out that teaching AIs to infer new ideas is really freaking hard. (Birds can fly because they have wings, mayflies can fly because they have wings, helicopters can... what??)
3) We turned to probability-based AI creation: you feed the AI a ton of data (training sets) and it can go, "based on the training data, most helicopters can fly."

4) This guy, Noah Goodman of MIT, combines inference with probability: he uses a programming language named "Church" so the computer can go, "100% of birds in the training set can fly. Thus, for a new bird there is a 100% chance it can fly." Then, "Oh, OK, penguins can't fly. Given a random bird, 90% chance it can fly. Given a random bird with a weight-to-wingspan ratio of 5 or less, 80% chance." And so on and so forth.
5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.

6) ???
7) When asked if sparrows can fly, the AI asks if it's a European sparrow or an African sparrow, and Skynet ensues.
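The probability-plus-inference move in step 4 can be sketched without Church at all, using plain conditional frequencies. This is a toy illustration, not Goodman's actual system, and the training data below is invented:

```python
# Toy training set: (name, weight_to_wingspan_ratio, can_fly)
# All numbers here are made up for illustration.
birds = [
    ("sparrow", 2.0, True),
    ("robin",   2.5, True),
    ("hawk",    3.0, True),
    ("goose",   4.0, True),
    ("turkey",  5.5, True),
    ("swan",    4.5, True),
    ("pigeon",  2.2, True),
    ("ostrich", 9.0, False),
    ("penguin", 7.0, False),
    ("kiwi",    6.0, False),
]

def p_fly(predicate=lambda b: True):
    """Estimate P(can fly | predicate holds) by counting the training set."""
    matching = [b for b in birds if predicate(b)]
    if not matching:
        return None  # no evidence either way
    return sum(1 for b in matching if b[2]) / len(matching)

# Given a random bird from the training distribution:
print(p_fly())                        # 0.7
# Condition on a feature, as in the summary's example:
print(p_fly(lambda b: b[1] <= 5))     # 1.0 in this toy data
```

Add penguins to the data and the unconditional estimate drops, exactly the "Oh, OK, penguins can't fly" update described above; conditioning on the wingspan ratio recovers a higher probability for the birds that look flight-worthy.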

Again, as I often bring up with AI researchers: we humans evolved over millions of years (or were created, it doesn't matter) from simple organisms that encoded information, building simple systems up into complex ones. AI, true AI, must be grown, not created. Asking the AI "is a bat a mammal?" or "can a squirrel fly?" ignores a foundation of the development of intelligence: our brains were built to react and store, not to store and then react to various inputs.

Ask an AI if the stove is hot. It should respond, "I don't know, where is the stove?" Instead, AI would try to make an inference based on known data. Since there isn't any, the AI, on a probabilistic measure, would say that blah blah stoves are in use at any given time and there is a blah blah blah. A human would put their hand (a sensor) near the stove, measure the change in temperature, if any, and reply yes or no accordingly. If a human cannot see the stove and has no additional information, either a random guess is in order or an "I have no clue" response of some sort. The brain isn't wired to answer a specific question; it is wired to correlate independent inputs, draw conclusions from the assembly and interaction of data, and infer and deduce answers.
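The point here, sense first, infer only as a fallback, can be sketched as a toy agent. Everything below (the function name, the 40°C threshold, the base-rate fallback) is invented for illustration:

```python
def is_stove_hot(read_sensor=None, prior_in_use=None):
    """Answer "is the stove hot?" the way the comment argues a mind does:
    measure if a sensor is available, otherwise admit ignorance (or, at
    best, fall back to a base rate)."""
    if read_sensor is not None:
        temp_c = read_sensor()  # put a hand (a sensor) near the stove
        return "yes" if temp_c > 40 else "no"
    if prior_in_use is not None:
        # the purely probabilistic answer the comment objects to
        return "probably, {:.0%} of stoves are in use at any time".format(prior_in_use)
    return "I don't know, where is the stove?"

print(is_stove_hot(read_sensor=lambda: 180))  # "yes"
print(is_stove_hot())                         # "I don't know, where is the stove?"
```

The design choice is simply an ordering: direct measurement beats stored statistics, and honest ignorance beats a confident guess built on no evidence.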

Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus, say, a lamp, determine that the people are engaged in action (versus a lamp just sitting there), making that relevant, hear the sound coming from the people, and then infer they are talking (making the link). In parallel, the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION". The rest of the information is stored, and additional threads may be created as the environment generates other links, but if the AI is paying attention to the conversation, then the TTL for the new threads and links should be short. When the conversation mentions the LAMP, the information network should link the LAMP information to the CONVERSATION thread and provide the AI with the additional information (gathered in the background) that travels with the CONVERSATION thread.

Now the conversation appears to be about the lamp and whether it goes with the room's decor. Again, links should be built, retroactively adding the room's information to the CONVERSATION thread (again expiring irrelevant information into a short-term memory buffer), and ultimately, since visual and verbal cues imply that the AI's opinion is wanted, it should result in the AI blurting out, "I love Lamp."
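The thread-and-TTL bookkeeping described above could be sketched as a link store where background observations ride along with the active CONVERSATION thread but expire from the short-term buffer unless refreshed. All class and method names here are invented:

```python
import time

class AttentionThread:
    """A focus of attention (e.g. CONVERSATION) that accumulates linked
    observations; unrefreshed links expire after a short TTL."""
    def __init__(self, name, ttl_seconds=2.0):
        self.name = name
        self.ttl = ttl_seconds
        self.links = {}  # object -> (background info, time it was linked)

    def link(self, obj, info):
        """Attach (or refresh) background info gathered about an object."""
        self.links[obj] = (info, time.monotonic())

    def active_links(self):
        """Only links younger than the TTL survive in short-term memory."""
        now = time.monotonic()
        return {o: i for o, (i, t) in self.links.items() if now - t < self.ttl}

conversation = AttentionThread("CONVERSATION", ttl_seconds=0.05)
conversation.link("LAMP", "brass, mid-century, clashes with the decor")
# The lamp was just mentioned, so its info rides along with the thread:
print(conversation.active_links())
time.sleep(0.1)
# Unmentioned since then, the link has expired from the short-term buffer:
print(conversation.active_links())  # {}
```

Mentioning the lamp again would call `link` and refresh the timestamp, which is the "retroactively adding the room's information" step in miniature.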

what? He specifically stated birds. Not Animals, or inanimate objects.

What if I tell it that a 747 is a bird?

This is very promising. In fact, it may be the first step in creating primitive household AI.

Very, very promising indeed.

Now I can mess with the AI's mind by feeding it false information, instead of messing with my child's mind. I was worried that I wouldn't be able to stop myself (because it's so fun), despite the negative consequences for the kid. But now that I have an AI to screw with, my child can grow up healthy and well-adjusted!

Maybe "axilmar" is more interested in the ethics of AI than commercial gain. Maybe "axilmar" is getting ready to create a free Cylon project that will eventually be completed by a Scandinavian student. Although "axilmar" never completes his own project, he'll consistently complain about the name of the newer, complete, more popular project and its derivatives. "Axilmar's" efforts will shift to creating and running the Free Cylon Foundation (or FCF). He spends the majority of his time giving strikingly similar speeches over and over around the world. Despite the absolute consistency of his message, he, and by association the FCF, are increasingly seen as a fringe political group. Despite the FCF's best efforts to promote the rights of the Cylons and hope for peaceful coexistence, the world's civilization eventually falls into chaos as the Cylons engage in war against humanity. Not long before his death at the hands of a Cylon, as he tries to convince the Cylon that he's more righteous than other humans, "axilmar" is overheard muttering some complaint about a printer...