Posted
by
CmdrTaco
on Tuesday February 08, 2011 @04:20PM
from the i-can't-even-people-learn dept.

An anonymous reader writes "Augmented reality is already adding digital information to the world around us, but the next step to making it truly useful will be when it starts to use elements of machine learning to understand the real world, Mike Lynch, boss of machine learning software specialist Autonomy, told silicon.com. Lynch also explained machine learning's links with the theorems devised by the 18th-century cleric Thomas Bayes."
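
For readers curious about the Bayes connection: Bayes' theorem just updates a prior belief with new evidence. A toy Python sketch, with invented numbers for a hypothetical landmark recognizer (this is illustration, not Autonomy's actual method):

```python
def bayes_update(prior, likelihood, evidence):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical recognizer: 1% of objects in view are landmarks (prior);
# it fires on 90% of landmarks and on 5% of everything else.
prior = 0.01
p_detection = 0.9 * prior + 0.05 * (1 - prior)  # total probability of a detection
posterior = bayes_update(prior, 0.9, p_detection)
print(round(posterior, 3))  # -> 0.154
```

Note the counter-intuitive result: even a 90%-accurate detector leaves only about a 15% chance the detection really is a landmark, because landmarks are rare to begin with.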

(Disclaimer: I have nothing to do with this site, and it is non-commercial as far as I can tell.)

This leads me to pimp my favorite new site/game/lesson... what is this? It's cool, that's all. Check out this neat implementation of a genetic algorithm that produces a cool demonstration of computer-generated evolution: http://www.boxcar2d.com/ [boxcar2d.com]
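
The core loop behind a site like that is a plain genetic algorithm: selection, crossover, mutation. A minimal sketch, evolving bitstrings on the classic "OneMax" toy problem (not BoxCar2D's actual car encoding, which I don't know):

```python
import random

random.seed(0)
POP, BITS, GENS = 30, 20, 40

def fitness(genome):
    return sum(genome)  # count of 1-bits: the "OneMax" score

def crossover(a, b):
    cut = random.randrange(1, BITS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # keep the fitter half (truncation selection)
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP - len(parents))]

print(max(fitness(g) for g in pop))  # converges toward 20 (all ones)
```

Swap the fitness function for "distance a simulated car travels" and the bitstring for a car-shape encoding, and you have the skeleton of the boxcar demo.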

Wow, I already knew about that 2D car experiment but just re-discovered it with a far more interactive design, thanks!

Someone should make an app for that (not kidding) that lets you be the designer of a car and makes your car compete against other users' designs, producing an online top ranking, friends ranking (you have been ousted!), etc... Maybe there is even something more sophisticated than building 2D cars that would make a great game... (Yes, I know the website lets you design a car, that's where I got

This reminds me of an old simulation like this that was done in 3D. At the time it had to be run on a university mainframe, but this was years ago... I would love to find that and see what could be done with my desktop rig.

One day some fool will ask a machine to figure out how to rescue the environment and fix it for us, and the machines will figure out that humans are the ones who destroy it... let's hope that Asimov's Laws come standard by that point, otherwise we'll be in deep, deep trouble.

"current state of it is pathetic"AI beats humans at chess and jeopardy. It solves difficult puzzles much quicker than you or I could. Maybe it's "pathetic" compared to what you would like it to be, but it's far from pathetic.

"there is no significant research going on"Please do the tiniest bit of research before posting garbage like this. Again though, you include another relative term like "significant" to cover yourself.

AI is always in the future, almost by definition. As soon as any new "artificial intelligence" algorithm goes mainstream and starts to be incorporated into real products, people tend to stop thinking of it as AI and more as "just what those computer thingies do." So it's you who keeps changing the definition of what AI is; it's not a lack of forward progress.

AI tends to be "whatever the most advanced stuff is that people are working on." I hate the terminology, as do you apparently, but a problem with the term

So it's you who keeps changing the definition of what AI is; it's not a lack of forward progress.

It depends on your perspective. Lots of things that once would not have been considered AI now fall under that umbrella. Expert systems come immediately to mind. From that point of view, it's the field of AI that keeps changing its definition.

What the GP likely considers AI is the original goal of AI: to create intelligent machines. To that end, the GP is correct; we're not any closer to solving that problem than we were in the 1950s.

I wouldn't call Watson pathetic. Also, how far have we already pushed the boundary of 'intelligence'? I mean, playing chess or basically anything easier than go is no longer considered intelligence, limited parsing of sentences is no longer considered intelligence. How long before we find out that nothing is left to be called 'intelligence'?

I mean, playing chess or basically anything easier than go is no longer considered intelligence, limited parsing of sentences is no longer considered intelligence. How long before we find out that nothing is left to be called 'intelligence'?

You're missing the point. Currently all the AI out there is top-down AI; it can hardlyly be considered Intelligent, and it over-emphasizes the Artificial. Show me an AI that learns to play chess all on its own, starting from nothing but the board and pieces; then I'll consider it intelligent. What you are pointing out as AI is really nothing more than massive lookup tables. Heck, even most GAs are nothing more than an elaborate way to create a lookup table that will work for a given game or condition. Deep

You are under the commonly held impression that intelligence requires some special magical ingredient. It does not. The human brain works within the laws of physics. It is simply a machine, albeit built out of organic material. While we don't fully understand all the details, we have made great leaps in the understanding of the mechanisms that drive it over the years. We haven't managed to create a machine with a generalized "intelligence" on par with a complete human brain yet, but we have been nibbling at

You are under the commonly held impression that intelligence requires some special magical ingredient.

I don't see how you can come to this conclusion given what the GP has written.

The statement that somehow because the technology uses "lookup tables" or some other mundane algorithms as part of its logic disqualifies it from being intelligent implies that it has to perform its task in some "special" way. For example, any reasonable person would consider the AI in modern speech recognition systems to be far more sophisticated and advanced than it was 40 years ago. The GGP indicates that he/she would not consider this intelligence because it's "Top Down AI", by which I assume he/she m

The statement that somehow because the technology uses "lookup tables" or some other mundane algorithms as part of its logic disqualifies it from being intelligent implies that it has to perform its task in some "special" way.

Doesn't this (including the unquoted remainder of the paragraph and the paragraph that follows) broaden the definition of AI to include nearly all information processing? I'm not sure where you're drawing the line between what is AI and what is not.

It is entirely possible that the GP simply holds a narrower view, not that he believed intelligence to be magical.

it is still considered intelligent if its behavior is intelligent.

This is a philosophical statement to which I'm not certain I can agree without a better definition of intelligent behavior. If the behavior of the

Doesn't this (including the unquoted remainder of the paragraph and the paragraph that follows) broaden the definition of AI to include nearly all information processing?

No, it doesn't, although there is no universal definition of what AI is, so it tends to be whatever the speaker says it is. AI is really a field of computer science, but it receives as input the disciplines of psychology, cognitive science, neurophysiology, mathematics and many more.
AI is not a particular technique or algorithm. The only true test of whether something is intelligent is through its behavior. It really goes back to the Turing test (http://en.wikipedia.org/wiki/Turing_test). If the appear

If it appears to the user to exhibit intelligence, it is in fact intelligence.

The Turing test is not only highly specific, it's more than a little contentious.

Your particular interpretation of the TT has led you to develop this strange method of assessment that lets you ascribe intelligence to virtually anything. Not only to my thermostat, but equally to a teaspoon or a cup of coffee.

Given your bizarre beliefs about what constitutes intelligence, I don't see any way we can come to any agreement or even mutual understanding.

I do in fact hold a narrow view of what Intelligence is, but it is deceptively simple: you must be able to learn as you go in order to be intelligent. Let's take something simple like Tic-Tac-Toe. I have an abstract move that works against most humans if they've never seen it before, and against just about all AIs. If I use it against a human and it works once, I might be able to get 1 or 2 wins out of it, but after that I will be blocked every
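
The "learn as you go" property being demanded here can be sketched in a few lines: an agent that records moves that lost in a given position and refuses to repeat them. A hypothetical toy, not a real Tic-Tac-Toe engine:

```python
import random

class RoteLearner:
    """Toy 'learn as you go' agent: remember moves that lost in a
    position and never repeat them. Illustrative only."""

    def __init__(self):
        self.bad = {}  # position -> set of moves that previously lost

    def choose(self, position, legal_moves):
        avoided = self.bad.get(position, set())
        options = [m for m in legal_moves if m not in avoided] or legal_moves
        return random.choice(options)

    def punish(self, position, move):
        """Call after a loss so the mistake is never repeated."""
        self.bad.setdefault(position, set()).add(move)

agent = RoteLearner()
pos = "X..|.O.|..."  # hypothetical board encoding
first = agent.choose(pos, [1, 2, 3])
agent.punish(pos, first)  # suppose that move lost the game
assert agent.choose(pos, [1, 2, 3]) != first  # it adapts after one loss
```

This is exactly the behavior described above: the trick works once, then gets blocked forever after.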

In the case of your thermostat, yes, I would consider it to have a certain level of intelligence.

Statements like this are why it is hard to take AI fanatics seriously. This statement reveals a form of anthropomorphism at best or an attempt at definitional ambiguity at worst. A thermostat is not intelligent. A thermostat (old style) takes advantage of materials science to make a switch close. Modern thermostats take other inputs into account (such as time of day or pricing information). However, the thermostat is not deciding(1) to change the temperature. It is opening or closing a switch based on
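
The distinction being drawn can be put in a few lines of code: a thermostat is a fixed rule mapping inputs to a switch state, with a little hysteresis. The setpoints here are invented for illustration:

```python
def thermostat(temp, heating_on, setpoint=20.0, band=0.5):
    """Return the new switch state: no learning, no goals, just a rule."""
    if temp < setpoint - band:
        return True          # too cold: close the switch, furnace on
    if temp > setpoint + band:
        return False         # too warm: open the switch, furnace off
    return heating_on        # inside the dead band: keep the current state

print(thermostat(18.0, False), thermostat(21.0, True), thermostat(20.0, True))
# -> True False True
```

Whether that fixed rule counts as a "certain level of intelligence" or as mere switching is precisely what this subthread is arguing about.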

I doubt many of us that have actually worked in the field of AI really care whether you take it seriously or not. The point about intelligence that, try as I might, I can't seem to get across here is that intelligence is a measurement of the amount of reasoning or decision-making capability an entity has. There is no concept of "this is intelligent while this is not"; rather, the concept is "this is more intelligent than that". If an entity has the ability to receive input and, based on that input, make

intelligence is the measurement of the amount of reasoning or decision making capability an entity has.

I can agree to using this definition with the caveat that making a decision and reasoning both require freedom of action. A thermostat does not have freedom of action.

When you make a decision do you not think there is some underlying set of chemical and electrical processes that occur that generate that decision?

How humans generate decisions is a red herring. When the thermostat turns on the furnace it is the builder/user of the thermostat that made the decision.

Obviously, if there were just programmers in the back room making all the decisions, they wouldn't have needed a multi-million-dollar computer. While I agree that having the programmers involved during the games taints the results, the programmers were merely tweaking the algorithms

I think you misunderstand. The decisions that I was referring to as made by the programmers are precisely the decisions of what algorithms to use for the evaluation function, how to trim th

This discussion has gone on far longer than it should have, but I just have one more comment.

I can agree to using this definition with the caveat that making a decision and reasoning both require freedom of action. A thermostat does not have freedom of action.

What does freedom of action mean? Unless you subscribe to some spiritual intervention, every decision you make is the result of physical processes occurring within your brain. You can't escape this fact. No matter how you might wish it otherwise, every thought you have is the product of basic electrochemical processes, so what you think of as "freedom of choice" is really an illusion. Excluding the effects of quantum m

With the thermostat example, you're saying that a human is intelligent because it can choose to not lower the temperature at night even though it would save energy while the thermostat would have to always lower the temperature to save energy.

So basically your definition of Intelligence is the ability to choose to act non-intelligently?

With the thermostat example, you're saying that a human is intelligent because it can choose to not lower the temperature at night even though it would save energy while the thermostat would have to always lower the temperature to save energy.

A human may choose not to lower the temperature for any number of reasons, including being cold. Intelligence can react to unanticipated situations; the thermostat cannot.

So basically your definition of Intelligence is the ability to choose to act non-intelligently?

There are two problems with this statement. The first is ambiguity in the use of the word intelligent: in the second usage you are using it as a synonym for rational, and intelligence does not require that one always be rigidly rational. The second is that it is a strawman. --JimFive

Also, the ability to learn is not a prerequisite to intelligence. It's definitely a "nice to have" feature so that the existing intelligence can be enhanced, but it's not a requirement. Even an amoeba has a certain amount of intelligence imparted to it through genetics. It has the ability to sense where more nutrition is in its environment and then use its rudimentary locomotion to move to the area of higher nutritional concentration. It has no ability to learn, but it does exhibit this very basic intelligent behavior.
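
That hard-wired, no-learning chemotaxis can be sketched in a few lines; the nutrient field here is made up for illustration:

```python
def concentration(x, y):
    """Toy nutrient field with its peak at (5, 5)."""
    return -((x - 5) ** 2 + (y - 5) ** 2)

def step(x, y):
    """Pure stimulus-response: sample the neighborhood, move to the
    richest cell. No memory, no learning, yet it finds the food."""
    neighbors = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(neighbors, key=lambda p: concentration(*p))

pos = (0, 0)
for _ in range(10):
    pos = step(*pos)
print(pos)  # -> (5, 5), the nutrient peak
```

Ten lines of fixed rules produce goal-seeking behavior, which is the crux of the disagreement: is that "basic intelligence" or just mechanism?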

You are cheating by redefining "intelligence". In fact, no normal person would call an amoeba intelligent, and there would be widespread resistance to calling anything other than human beings intelligent. Chimps, dogs and dolphins have been described as intelligent, but even this is by no means universally accepted.

We will have true Artificial Intelligence when machines can do things that human beings do, not single cell organisms.

Also, the ability to learn is not a prerequisite to intelligence. It's definitely a "nice to have" feature so that the existing intelligence can be enhanced, but it's not a requirement. Even an amoeba has a certain amount of intelligence imparted to it through genetics. It has the ability to sense where more nutrition is in its environment and then use its rudimentary locomotion to move to the area of higher nutritional concentration. It has no ability to learn, but it does exhibit this very basic intelligent behavior.

You are cheating by redefining "intelligence". In fact, no normal person would call an amoeba intelligent, and there would be widespread resistance to calling anything other than human beings intelligent. Chimps, dogs and dolphins have been described as intelligent, but even this is by no means universally accepted.

I'm not redefining intelligence. My point was that intelligence is a relative term. Compared to a rock, yes an amoeba has intelligence. I don't think there is really any debate (at least in the scientific world) over whether chimps, dogs or dolphins have intelligence. Any animal that has a cerebral cortex (which includes all mammals) has intelligence by even the most strict (scientific) definition. That is not to say that they are all equally intelligent, which was the point I was originally trying to make

Show me an AI that learns to play Chess all on its own starting from nothing but the board and pieces, then I'll consider it intelligent.

Umm, can you point to ANY human that can learn chess starting from nothing but the board and pieces? You do realize that you have to teach humans how to play chess as well.

Show me a human who could do that and I wouldn't call them Intelligent, I would call them psychic!

It's pretty trivial to teach a computer the rules of chess and have the computer build a database of moves on its own; actually, it's a lot simpler and a lot quicker than having a human do exactly the same thing. It might take a lot of wor
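
A sketch of that "rules only, build your own move database by self-play" idea, using Nim instead of chess so the whole table fits in a dict (purely illustrative; chess would need vastly more machinery):

```python
import random

random.seed(1)
wins = {}  # (stones_left, take) -> [wins, plays], built purely by self-play

def play_game():
    """Nim: take 1-3 stones; whoever takes the last stone wins.
    Both sides play randomly; we only record what happened."""
    stones, history, player = 7, [], 0
    while stones > 0:
        take = random.choice([t for t in (1, 2, 3) if t <= stones])
        history.append((player, (stones, take)))
        stones -= take
        player = 1 - player
    winner = 1 - player  # the player who took the last stone
    for who, key in history:
        rec = wins.setdefault(key, [0, 0])
        rec[1] += 1
        if who == winner:
            rec[0] += 1

for _ in range(5000):
    play_game()

def best_move(stones):
    """Consult the self-built table for the move with the best win rate."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: wins[(stones, t)][0] / wins[(stones, t)][1])

print(best_move(7))  # game theory says take 3, leaving a multiple of 4
```

The program was given only the rules, yet its self-built table recovers the known optimal move. Whether doing the same for chess would count as "intelligent" is the question at hand.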

Time and time again I see this type of comment on Slashdot when a discussion of AI comes along and sometimes it gets modded up, but christ it's so fucking ignorant.

There is strong AI and weak AI. Strong AI is a long-term goal: producing a human-like intelligence, or even better. We are nowhere near this, and slagging off the field because of that is like slagging off physics as a field because it hasn't built a complete grand unified theory of absolutely everything yet.

If you can get a computer to do [those tasks] then that's a phenomenal saving, and it frees up the human to do something more interesting.

Right. That's what's been happening. Humans have been freed up to do more interesting things, and for more pay, too. Uh huh.
So, the more we make machines do more of the work people do, the more interesting work there is for the rest of us? Those of us who don't own the machines? Those of us who need to make a decent living? Does this guy live on planet earth? Can it be that in 2011 there are still people in decision-making positions who still believe that?

well, struggling to live on the street and squatting in flophouses can be considered "interesting;" he didn't say "fruitful."

an Ayn Rand quote would be all too easy to find, so here's one from the radical left: "Down with a world in which the guarantee that we will not die of starvation has been purchased with the guarantee that we will die of boredom."

This is a complaint about how wealth is distributed, not a complaint against progress.

No, I'm serious, and not being snarky -- for many people already in positions of power, "progress" means them getting more [desirable noun]. So while the recent global financial meltdown set many of us back considerably, it has still been deemed as "progress" by the financial elite, at least as I've been reading in the media. For that matter, I've been reading and hearing for over a year now about how the economy is supposedly doing better and better, i.e. "progressing", but I have yet to see my personal

Your point started off being that 'more interesting = more pay' is not sustainable. That's true. If money is based on any sort of 'scarce' standard, then you'll run out if you increase the pay-grade of everyone phased out by machines. What's really supposed to happen however is that the minimum wage jobs shift, so everyone goes DOWN a pay-grade.

The 'idea' is supposed to be that when machines are doing the menial tasks, like 'farming', then the cost of living will go down for everyone. After removing

You're simply repeating the standard naive argument from 60 or more years ago. Eliminating "menial" labor, more commonly called "blue collar jobs," is neither scalable nor survivable. Those people will not become engineers, scientists, professionals, or "white collar" employees as your model will effectively require. While many products and services may diminish in price, a great many people will become under- or unemployed. The poverty line will go up, not down. Beware of simply accepting pop-culture notions of capitalism; they are wrong. Many counter-intuitive results will come from making machines do all the work. Those who don't own robots will be increasingly unable to participate in the economy.

Personal service to the few very wealthy will not be enough to replace all the jobs lost, especially as service jobs become automated as well. The automated checkout lines that exist today are really just the beginning for the service industry; it's not going to stop there. I do think that replacing manual labor with robotics is a good thing, but in our present economic system it spells disaster for a huge portion of the population. We need to make sure that the increased productivity caused by automation

You are overlooking the fact that under a proper socialist system, the wealth created by the worker robots would be shared amongst the entire population, not reserved for rich people who privately own the means of production.

This is because you can only conceive of a capitalist system where you are defined by a combination of the money you have and the "productive work" you do.

If most people no longer have jobs, so fucking what? Here, you are biased by some version of the protestant work ethic, where

You are so off the mark it is almost cute. First and foremost, the trend towards replacing human labor with robots is occurring under a corporatist plutocratic regime, not some ill-defined "socialist system." The fruits of the robotic labor serve the plutocracy, who own the means of production. This will not change in the foreseeable future.

This is because you can only conceive of a capitalist system where you are defined by a combination of the money you have and the "productive work" you do.
Now that's

You're simply repeating the standard naive argument from 60 or more years ago. Eliminating "menial" labor, more commonly called "blue collar jobs," is neither scalable nor survivable. Those people will not become engineers, scientists, professionals, or "white collar" employees as your model will effectively require. While many products and services may diminish in price, a great many people will become under- or unemployed. The poverty line will go up, not down. Beware of simply accepting pop-culture notions of capitalism; they are wrong. Many counter-intuitive results will come from making machines do all the work. Those who don't own robots will be increasingly unable to participate in the economy.

You haven't given me any reason to think that his "naive" argument is not correct other than your saying so. Is life worse now than 60 years ago because automation has replaced people in a number of menial tasks? I don't see it. I can buy an iPod for a small amount of money because the factories that create them are largely automated, and the ships that transport them from there to here are largely automated, and the packaging and delivery system is largely automated. The same applies to food and clothing

You have in effect answered your own question with an attempt to qualify your claims:

If, and this is a huge if, almost all tasks that required human intervention for 'menial' tasks was taken over instantly by robots, then yes we would have a problem. A huge percentage of our workforce would suddenly not have a job, we'd have social unrest, etc. But the way that it's been happening for the past couple hundred years is that the automation has been creeping, slowly replacing tasks. Yes, people who used to wo

I think we need to think about radically reforming the economy as automation becomes more and more common. Eventually only creative work will be available to humans, and while creative work is great, I doubt it can provide enough jobs for the entire population. If we don't do something radical to make sure everyone shares in the fruits of the increased productivity of society, we will have a huge permanently unemployed underclass, some middle- to upper-class workers, and a few massively wealthy owners of the a

Why has this not happened in the past? There has already been an enormous shift in the types of work that people do. Previously, most people did menial tasks on farms, and now they do not. After farming was manufacturing, and that is largely automated (though not entirely, obviously; cf. China). What are people doing now that they did not do then? How did that shift happen such that we did not have 50% unemployment? What does that imply for the future? I think that it means that people will still be p

Why hasn't it occurred? Because powerful computing hardware has never been so cheap and abundant. That is the new, disruptive change, and it is still growing by leaps and bounds. You can already buy cards with 64 cores running Linux [tilera.com] and put them in your PC or robot. Mobile devices are already going multicore [linux-mag.com]. Distributed machine learning [ieee.org] is already a reality. Those things did not exist before, and that is why there hasn't been 50% unemployment due directly to automation. Forget Asimov and Bradbury; they did not fo

If I could go back and do it all over again I think I would spend my entire life trying to figure out how a mosquito's brain works. There must be research along these lines happening somewhere but you never hear about it - they are always trying to map out mouse brains, or some other small mammal.

Why so ambitious? Start small - if a computer program could be made that perfectly imitates a mosquito it would be a huge breakthrough.

If I could go back and do it all over again I think I would spend my entire life trying to figure out how a mosquito's brain works. There must be research along these lines happening somewhere but you never hear about it - they are always trying to map out mouse brains, or some other small mammal.

If the assumption is that once you're done with this project you'll move on to human brains, then a mosquito has some pretty severe I/O differences compared to a typical mammalian or human brain.

The best work I know of is on the sea slug, Aplysia californica. Do a Google search for neural system simulations or models of it. A lot of work has gone into determining the types and connections of every neuron in the organism, where they came from developmentally, what they do and how they work. There are about 19,000 neurons. We do not have a complete model for it yet. So we're a really, really long way from doing a mosquito brain, though I'm not sure how many neurons they have. Honeybees have

What they did was a simulation of a neural network with the same number of neurons as a cat cortex. The cortex is only part of the brain, and just simulating a bunch of neurons isn't the same as simulating the functionality. That's still a long, long way from being on par with an actual cat brain.

I remember this story. The catch was that they simply (ha) set up a software brain simulation which had enough neurons and synapses (from the article: 760 million and 6 trillion, respectively) to put it above cat scale, but as far as I can tell no actual attempt was made to virtually render a living brain in a computer.

The brain of an ant contains a mere 250K neurons; seems like it would be a cakewalk after the cat-scale exercise :)

Thanks for both of your replies - interesting stuff. Someone else posted this link [ieee.org] in reply; it goes into pretty fine-grained detail about efforts to model a fruit fly brain, pretty fascinating. The article points out that in addition to the extreme complexity of the actual connections, it is even more complex in that "firings" of the neurons aren't simple on/off firings; they can each fire at different percentages. It also goes into the storage space required to store what they find - it's staggering
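
For a feel of the basic unit these simulations scale up, here is a minimal leaky integrate-and-fire neuron; real models are far richer (graded rates, synaptic dynamics, dendritic structure), and the parameters here are toy values:

```python
def simulate(input_current, steps=100, dt=1.0, tau=10.0, threshold=1.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    charges from the input, and emits a spike (then resets) at threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + input_current)
        if v >= threshold:
            spikes += 1
            v = 0.0
    return spikes

# Graded response: stronger input -> higher firing rate, not just on/off.
print(simulate(0.05), simulate(0.12), simulate(0.30))  # weak input never fires
```

Even this cartoon shows the "not simple on/off" point: output is a firing rate that varies continuously with input strength. Multiply by hundreds of thousands of neurons with realistic dynamics and the storage and compute numbers in the article start to make sense.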

Wow, thank you. I should have been a little more diligent in my googling on this topic, or at the very least posted something about it on slashdot earlier:)

In all of the links to various attempts at modeling "simple" brains that I've been given it becomes clear that even the most rudimentary of brains requires an incredible amount of time and effort - in addition to massive amounts of storage space. I still believe that this path is the one that will eventually lead us to true AI and an understanding of

I can easily say what I look forward to, and it will come from a combination of machine learning, human input, and structured and unstructured information: the ability to look at something and know how it works, what it's made of, where it came from, and who's involved with it. I mean, not having to google/wikipedia every interesting aspect, but having it show up translucently in front of what you're looking at.

This would be especially interesting for complex things like computers, electrical devices, organisms.