How robots think: an introduction


Fritz Lang's 1927 German expressionist film Metropolis gave us an early 20th-century vision of the modern robot (and the inspiration for C-3PO).

A future full of helpful robots, quietly going about their business and assisting humans in thousands of small ways, is one of technology's most long-deferred promises. Only recently have robots started to achieve the kind of sophistication and ubiquity that computing's pioneers originally envisioned. The military has hundreds of UAVs blanketing the skies above Iraq and Afghanistan, and Roombas are vacuuming living rooms across the country. At the bleeding edge, there was the 2005 DARPA Grand Challenge. This grueling, 132-mile, no-humans-allowed race through the desert showcased full-sized, completely autonomous robot cars that could navigate rugged desert terrain, avoiding rocks and cliffs and cacti in a race for a $2 million cash prize. The follow-on 2007 Urban Challenge went even further: the robotic competitors had to drive alongside humans on crowded roads, recognizing and avoiding other cars and following the rules of the road. Suddenly, the robotic future doesn't look so far off.

In some ways, the remarkable thing is that it took so long to get here. In the 1960s, researchers in artificial intelligence were boldly declaring that we'd have thinking machines fully equivalent to humans within 10 years. Instead, for most of the past half-century, the only robots we saw outside of movies and labs were arms confined to factory floors or machines remotely operated by humans. Building machines that behaved intelligently in the real world was harder than anyone imagined.

Robot cars lined up at the starting line for the 2005 DARPA Grand Challenge race.

The biggest challenge for robots then and now lies in making sense of the world. With perfect information, many of the hardest problems in robotics would be nearly trivial. We've gotten very good at building and actuating robots, but in order for them to use their abilities to the fullest they need to make sense of their surroundings. A robot car has to know where the road is and where other cars and people are. A robot servant needs to be able to recognize household items.

Today's robots are starting to be able to make these difficult determinations. The question we're here to answer is: how? What allowed robots to go from blind, dumb, immobile automatons to fully autonomous entities able to operate in unstructured environments like the streets of a city? The most obvious answer is Moore's Law, and it has certainly been a huge factor. But raw processing power is useless without the right algorithms. A revolution has taken place in the robotics world. By embracing uncertainty and using the tools of probability, robots are able to make sense of their surroundings like never before.

In this article, we'll explore how robots use their sensors to make sense of the world. This discussion applies mostly to robots that carry an internal representation of the world and act according to that representation. There are lots of successful robots that don't do such "thinking": the military's UAVs are mostly remotely piloted, linked by an electronic tether to human eyes and brains on the ground. The Roomba does its job without building a map of your house; it just has a series of simple behaviors that are triggered by timing or by bumping into things. These robots are very good at what they do, but to autonomously carry out more complicated tasks like driving, a robot needs some understanding of the world around it. The robot needs to know where it is, figure out where it can and can't go, and decide what to do next. We'll be discussing how modern robots answer these questions.

Sensing and Probability

As it turns out, the big challenge in many robotics applications is the same: it's easy to do the right thing, but only if you know what the right thing is. We've known how to steer a car automatically for a long time. What's hard is knowing where the road is and whether that shape by the road is a fire hydrant you can ignore or a child about to run across the street. To operate in an unstructured environment, a robot needs to use sensing to understand the state of the world relative to itself. Sensing is the key to successful robots, and probability is the key to successful sensing.

Sensing is difficult because the world is a complicated, unpredictable place. Remember that the robot doesn't get to "see" reality directly. It can only take measurements through its sensors, which don't perfectly reflect the true state of the world. Just because your sensor tells you something doesn't mean it's true. For example, GPS position measurements can jump by several meters, even when the receiver is stationary. Some things aren't even possible to measure directly; if you're trying to distinguish between a person and a cactus, there's no sensor that directly measures "humanness." You have to look at different measurable properties like shape and size and so on to infer if you're seeing a person.
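As a concrete illustration, here is a minimal Python sketch (all noise figures invented for illustration) of what that jumpiness looks like for a stationary receiver, and why combining many measurements helps:

import random

# A stationary GPS receiver at the true position (0, 0) whose
# individual readings jump around by a few meters of Gaussian noise.
random.seed(42)

TRUE_X, TRUE_Y = 0.0, 0.0
NOISE_METERS = 3.0  # assumed standard deviation of a single reading

def gps_reading():
    """One noisy measurement of the receiver's position."""
    return (random.gauss(TRUE_X, NOISE_METERS),
            random.gauss(TRUE_Y, NOISE_METERS))

readings = [gps_reading() for _ in range(100)]

# Any single reading can be meters off...
print("first reading:", readings[0])

# ...but averaging many readings of a stationary receiver
# converges toward the true position.
avg_x = sum(x for x, _ in readings) / len(readings)
avg_y = sum(y for _, y in readings) / len(readings)
print("average of 100 readings: (%.2f, %.2f)" % (avg_x, avg_y))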

A robot doesn't directly "see" the true state of reality. Instead it must infer the state from noisy sensor measurements.

Robotic sensing is like Plato's allegory of "the cave": there are people moving outside, but our robot is the prisoner chained to the wall, only able to see the shadows those people cast. The movements of the people outside are the true state, and the shadows are our robot's sensor measurements: we have to infer what's actually happening by observing the shadows. This process of inferring the true state of the world, the process of updating what we believe is true based on what our sensors tell us, is called state estimation.
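To make that concrete, here is a toy Python sketch of a single state-estimation step using Bayes' rule. The sensor model probabilities are invented for illustration; a real system would estimate them from data.

# The robot can't measure "humanness" directly, so it starts from a
# prior belief over the hidden state and updates it with Bayes' rule
# each time a noisy sensor reports "moved" or "still".

belief = {"person": 0.5, "cactus": 0.5}  # prior belief

# Assumed sensor model: P(measurement | state).
sensor_model = {
    ("moved", "person"): 0.7,  # people often move between glances
    ("moved", "cactus"): 0.1,  # a "moving" cactus is mostly sensor noise
    ("still", "person"): 0.3,
    ("still", "cactus"): 0.9,
}

def bayes_update(belief, measurement):
    """Return the posterior P(state | measurement) via Bayes' rule."""
    unnormalized = {
        state: sensor_model[(measurement, state)] * p
        for state, p in belief.items()
    }
    total = sum(unnormalized.values())
    return {state: p / total for state, p in unnormalized.items()}

# Two consecutive "still" readings push the belief toward "cactus".
for z in ["still", "still"]:
    belief = bayes_update(belief, z)
    print(z, "->", belief)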

Comments

So I really like the article, but I feel like cameras are not the best sensor to use for the "Camera-Based Navigation" section of the article. Laser scan map matching fits your example diagrams much better than vision-based methods. I am working on a project that needs some basic indoor localization very similar to what you describe in that section of the article, using CARMEN. Essentially the robot is driven around the environment ahead of time and creates an overhead map, upon which future scans are matched.

From my (somewhat limited) experience, vision in cutting-edge robotics is used much more for object recognition or movement estimation than for localization in an existing map. Basically SIFT + Bags of Features + Machine Learning is magic.

"It's too early to say for sure, but might the human brain be a giant neural calculator, running Bayes' Rule on probability distributions that we've been learning all our lives?"

Interesting thought that reminds me of "On Intelligence" by Jeff Hawkins. I can't say that I completely understood everything discussed in this article, nor did I completely understand Mr. Hawkins' book, but there does seem to be some overlap. Specifically, he argues that the brain really just runs a single algorithm over and over, and that this algorithm (as I recall) was really just a comparison between stored memory and sensory inputs. Really fun read if you have the time.

However, I think you may have a typo on page 2 concerning the reference to the left hand side of the equation. "P(X = x | Y = y)" should probably be " P(X = x | Z = z)", at least for the sake of clarity.
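For reference, with X the hidden state and Z the measurement (the convention the article appears to use elsewhere), Bayes' rule reads:

P(X = x | Z = z) = P(Z = z | X = x) * P(X = x) / P(Z = z)

The left-hand side is the updated belief in state x after seeing measurement z, which is why Z rather than Y belongs there.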

bvz: There is certainly some overlap with regard to the inferences that go into decision making (or even sensory formulation). Hawkins is addressing a much more expansive neural paradigm, for which Bayes' rule seems an obvious candidate for modeling purposes. But it's important to keep in mind that Bayes' rule can just as easily be applied in architectures that don't exhibit the distributed, top-down/bottom-up neural mechanisms described in his work. It's always important to keep in mind that a model (mimicking output) doesn't necessarily reflect the deeper reality of a thing's operation, so I tend to err on the conservative side when drawing relations between the brain and our successful models of prediction. But I thought it was a really enjoyable book!

Computers and robots are no more "intelligent" than they were 50 years ago. Sure, they seem "smarter", but that's because of some clever tricks of programming, and vastly more computing power being thrown at narrow, specific problems. But as far as any kind of real "intelligence" goes, the best we have been able to manage has been something just a little bit short of the "intelligence" of a cockroach. And that's not very.

Face it. While all the clever algorithms have helped tremendously at particular tasks (like navigation), today's "intelligent" vehicle that navigates will never learn how to be, say, a bartender too. Much less any other kind of "intelligence" that can learn arbitrary new rules and tasks and carry them out! The very fact that you can explain how these machines do what they do in a few short pages worded for the layman is prima facie evidence that there is nothing remotely like "intelligence" going on here. Nor anywhere in the robot world, for that matter.

Of course they are getting better at their assigned tasks. I don't deny that. I am a programmer, after all, and I have seen the advances. But intelligence, or anything like it, they are not. I am not nitpicking; the word has been used improperly for years and I want to clearly point out that this is one of those occasions.

Maybe someday we will get to the point of creating machines that might be said to be intelligent. But we are not approaching it now. EVERY avenue of creating something like intelligence that we have tried has so far been a dead end. Maybe one day we will stumble upon it and there will be a big revelation. But what we have been doing these last decades has just been incremental improvement in the same old things. If intelligence were simply a matter of quantity -- just more of the same -- I think we would have seen some real progress by now. But I assert that it is not. There is a quality that the current large quantity lacks.

Neither do people. Everything between Cubism and Phenomenology pretty well busted that myth.

I'll have to echo Jane Q on intelligence, too. Likening the human brain to a contemporary computer is where most of the conceptual errors are planted. Any true intelligence resembling ours will, at least, require something very near to a mammalian biology, an affinity toward social structures, and a body of sensory experiences shaped by needs and desires. Anything else will be too alien to recognize or a mere simulation, totally lacking an inner world.

Actually, Jane Q. Public has a point, though she doesn't have it all right.

There has been a big debate in the philosophy of mind on the notion of "strong AI". The idea of strong AI is that of an artificial intelligence which truly understands things. There is a difference between that and mimicry.

Here are two problems for strong AI, though they only begin to scratch the surface:

1. All modern AI comes about from a binary system of calculation. However, any binary system of calculation can be realized using any physical system. For example, instead of electricity, water could flow through pipes. If a program uses binary calculations, it could theoretically run on a field of rocks arranged to indicate 1s and 0s. So, if we are to admit that a software program can have strong AI, we have to admit that a field of rocks could be self-aware to the same extent.

2. I cite Chalmers in his book "The Conscious Mind": "Searle has argued that implementation is not an objective matter, but instead is "observer-relative": any system can be seen to implement any computation if interpreted appropriately. Searle holds, for example, that his wall can be seen to implement the WordStar word processing program." (Full disclosure: Chalmers then goes on to try to refute Searle's thesis.)

These difficulties aside, I disagree with Jane in that I think computers *are* becoming more intelligent, insofar as a toaster can be more intelligent than a grill (it has a system of memory and timing, for example). But there is the problem of consciousness.

I was excited to see this article, but I was disappointed to find that the author teases the reader with certain AI possibilities without even mentioning the philosophical debate that puts it in question. I don't mean that the article has to be exhaustively and indigestibly philosophical, but it could have alluded to the difficulties if it's going to make philosophical claims about the connection between human and computer intelligence.

If that's the case, then computers can't be smart. If submarines "swim" it's only metaphorically. (I realize this is probably what you meant, but I'm making it explicit)

Information processing is not what makes us intelligent. A chess AI was written specifically to be good at chess. Its "intelligence" is that of the programmers, not its own. The flocking algorithm isn't smart, its programmers were. They are good milestones for utility, but not measures of intelligence.

If the goal posts seem to be shifting, it's because some AI programmers are settling on ridiculously anaemic definitions of intelligence like "can translate into Chinese passably" and "can predict how a tree would grow on Mars". You'd be hard-pressed to find a philosopher or psychologist who would accept such a myopic view. What's the threshold? How many FLOPS to be intelligent?

And just because a person would have to be smart to do it, doesn't mean the machine is. If one person can't model particle collisions or solve static systems in seconds but another can, does it mean they're not intelligent? Maybe the former isn't in the "Gee whiz you're smart!" sense, but they certainly are in the "I have subjective experiences and give meaning to my world" sense.

Very interesting article. I'm a programmer by trade and I've dabbled a bit with AI as an undergrad, in neural nets rather than probabilistic models. I'm hardly an expert in statistics, but from what I can see, all these models are apparently assuming a Markov process, i.e. the next state's probability depends only on the current state -- I wonder if that's realistic.
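In symbols, the Markov assumption says that once you know the current state, the earlier history adds nothing:

p(x_t | x_0, ..., x_{t-1}) = p(x_t | x_{t-1})

It's what keeps the filter's update cheap: the robot only has to carry its current belief forward, not its whole history of states.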

Regarding intelligence - I think it's pretty unfair to compare the "top of the line" biological brain with these robots. A more fair comparison - and more reasonable goal - would be a much "dumber" animal (like an insect), which I could argue are also specialized in specific tasks.

Quote:

From my (somewhat limited) experience, vision in cutting-edge robotics is used much more for object recognition or movement estimation than for localization in an existing map.

There is some vision-based localization work going on, particularly in the SLAM area. For example, the work of Andrew Davison and his research group on real-time monocular SLAM. (That page is also linked from the Wikipedia page you linked.)

Quote:

All modern AI comes about from a binary system of calculation. However, any binary system of calculation can be realized using any physical system. ... So, if we are to admit that a software program can have strong AI, we have to admit that a field of rocks could be self-aware to the same extent.

Here's the thing, though - the brain is a realized physical system of networked charge paths and potentials. The basic functions are, in fact, binary - either a neuron is firing down a synapse, or it's not. Given this, to believe that there is some fundamental barrier between what AI could accomplish and "intelligence" is to believe that intelligence arises from something other than physical processes.

Quote:

Information processing is not what makes us intelligent. A chess AI was written specifically to be good at chess. Its "intelligence" is that of the programmers, not its own.

So intelligence can only exist if it independently evolves?

If you're going to define intelligence such that anything people create can't be intelligent, but only reflect the intelligence of its creators, then yeah, there will never be strong AI. But I think most people would consider that an absurd definition.

So for it to demonstrate intelligence, for you (and I agree), it would have to be able to change its own code as it makes observations, and to make decisions that it was not programmed to handle, right?

For example, the car robot that has no experience or knowledge of dogs would have to discover them and then make rules as how to handle them, correct? Or if it came into a conflicting situation where it is inevitable that it will hit either a dog or a person and choose which to hit, when it has never encountered a dog or a person before (the desert obstacle course car that knows cactus and rocks only)?

It's awesome to see an article laying out some basics of robotic development. However, as a roboticist (and one who has done a fair bit of perception work) I have to protest the description of all perception as probabilistically based. There are certainly many approaches out there that use probabilistic methods to perform various tasks, but they are not the only or even the mainstream approach. Several of the DARPA Grand Challenge vehicles used deterministic approaches to perception and world modeling; I myself helped write several such algorithms that were used. Nitpicky to some extent, but I wanted to point out that it isn't all based on Bayesian reasoning.

As for whether Strong AI can be built or not, the way I look at it is that we have yet to find any evidence that we can't do so, other than that we have yet to do so. But the way I currently figure it, the most powerful robot systems I've worked with had about the same computational power as an earthworm (that estimate comes from several talks at FR25/RED60). That's a very loose estimate, but the flipside? That robot was Boss -- and it was capable of fully autonomous city driving. In those terms, we're getting quite a bit of bang for the computational buck. Once we get computational power up to an actual mammal, let alone a primate, I would bet we can do more than "one task" well.

Intelligence is a set of tools to get a job done, and what you need is the minimum toolset that addresses the maximum number of potential barriers. A robot doesn't need to be all that 'smart' for some very specific tasks.

For example, a robot car doesn't care if it sees a person or a cactus; it doesn't want to run into either one! More to the point, it wants to know if the object it sees can move and potentially be in danger of being hit. This is where a robot can be both superior and inferior to a human: a human knows from past experience that a cactus won't move and can ignore it, while a robot must assess it. But if the cactus DID move, the human might very well run into it, whereas the robot would recognize it as a moving cactus-beast and not turn it into roadkill.

Quote:

The basic functions are, in fact, binary - either a neuron is firing down a synapse, or it's not.

Not exactly true. A neuron can fire at a certain speed, and due to the ways that axons and dendrites connect, a dendrite can get a "stronger" signal from certain neurons compared to others. Sometimes the combined firing of several neurons is required to activate the next neuron in the pathway.

That is to say, you can't start a chain reaction that'll create a hallucination of an apple by stimulating one single neuron.

Quote:

The basic functions are, in fact, binary - either a neuron is firing down a synapse, or it's not.

Been a long time since neurobiology, but if I remember correctly, the neuron does not simply fire or not fire. The strength of the signal is also transmitted via how many neurotransmitters are released. Not only are chemical neurotransmitters either released or not, but the amount released also passes information.

I agree with Jane Q. Maybe it is because I am old enough to remember "2001: A Space Odyssey" when it came out. We have been promised "intelligent" computers for a long time and nothing like an "intelligent" computer exists. A machine that can beat a chess master but not recognize the difference between a shoe and a sock is hardly "intelligent".

Quote:

Given this, to believe that there is some fundamental barrier between what AI could accomplish and "intelligence" is to believe that intelligence arises from something other than physical processes.

So are you interpreting robotic intelligence based on the empirical inputs of experienced roboticists, or on your philosophical axiom that intelligence must not and can not possibly arise from something other than physical processes?

The types of problems needed to be overcome for improved robotics can be illustrated by looking at two games that computers have tackled -- backgammon and chess.

Chess playing programs have mostly followed the pattern of using some basic rules (controlling the center is good, protecting the king is good, having more pieces is good) to evaluate a position. While this might give a very rudimentary evaluation, it's enhanced by looking ahead more and more. For a particular move, find my opponent's move that minimizes my position value, then look at my best move, then the opponent's, and so on. Relatively strong computer programs were built with fairly simple processing power using this pattern. (Of course this is an oversimplification -- there are position databases, rules for how deep to explore a particular path, etc.)
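That look-ahead pattern is easy to sketch. Here is a minimal, runnable Python toy (a hand-made game tree with invented leaf scores, not a real chess engine):

# Minimax over a tiny hand-made game tree. Leaves carry toy
# "rules-based" evaluation scores; inner nodes alternate between our
# move (maximize) and the opponent's (minimize).

game_tree = {
    "root": ["a", "b"],   # our two candidate moves
    "a": ["a1", "a2"],    # opponent's replies to move a
    "b": ["b1", "b2"],    # opponent's replies to move b
}
leaf_scores = {"a1": 3, "a2": -2, "b1": 1, "b2": 2}

def minimax(node, maximizing):
    if node in leaf_scores:  # search depth exhausted: apply the rules
        return leaf_scores[node]
    scores = [minimax(child, not maximizing) for child in game_tree[node]]
    # Our turn: take the best score. Opponent's turn: assume they pick
    # the reply that minimizes our position value.
    return max(scores) if maximizing else min(scores)

# Move "b" wins: "a" looks tempting (a leaf worth 3) but the opponent
# can punish it (-2), while the worst case after "b" is still +1.
best = max(game_tree["root"], key=lambda move: minimax(move, False))
print(best, minimax("root", True))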

Backgammon programs initially used this approach but the results were not very good. One of the challenges was in developing the rules. Backgammon is basically a race, but sometimes having another piece hit and sent back is good, and being farther behind is better in some types of positions (in prime vs. prime -- it's referred to as "timing"). Eventually programmers developed neural nets to play backgammon. The programs would play up to millions of games against themselves, see what worked, and use feedback to adjust the neural net. These programs eventually became as good as the very best humans. (I haven't been following that field much of late, but I believe they are now better, even though the best humans have been learning by studying the neural nets.)
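The feedback loop can be sketched in miniature. The Python toy below replaces backgammon with a trivial random-walk race and the neural net with a plain lookup table, but the shape is the same: play games against yourself, then nudge each visited position's value toward the actual outcome.

import random

# States 0..10 on a line; a token random-walks from 5 until it hits
# 0 (loss) or 10 (win). values[s] is the learned estimate of
# P(win | state s); the true answer for this toy game is s/10.
values = {s: 0.5 for s in range(11)}
values[0], values[10] = 0.0, 1.0
LEARNING_RATE = 0.1

for game in range(5000):
    state, visited = 5, []
    while state not in (0, 10):
        visited.append(state)
        state += random.choice([-1, +1])  # "self-play": random moves
    outcome = 1.0 if state == 10 else 0.0
    # Feedback: move each visited state's value toward the outcome.
    for s in visited:
        values[s] += LEARNING_RATE * (outcome - values[s])

print({s: round(v, 2) for s, v in sorted(values.items())})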

One interesting difference is that the rules-based approach can answer questions about why one position is better than another, e.g. "control of the center is worth more than the pawn, so the sacrifice is worth it." The neural net can't answer such questions -- it just spits out a value for a position.

Different problems need to be solved in different ways. The robots that search Google Images and try to find a matching real-life object are more like the backgammon/pattern-matching approach: their answer to why something is a stapler is that it seems to match all the other examples of staplers. The example in the article comparing a person to a cactus is more like the rules-based approach. ("I think it's a person because the probability of a face outweighs the probability of it being green and having thorns.")

Figuring out which way to solve different robotic problems is going to be one of the keys. Rules might get one close quickly, but it might be impossible to get good enough. On the other hand one could train a neural net for a long time and have something that still doesn't work.

"A robot doesn't directly "see" the true state of reality. Instead it must infer the state from noisy sensor measurements."

What tickles me about this is that we humans are exactly the same way. The difference is that we have more highly optimized sensors and a lifetime's experience in interpreting their readings.

That's exactly what I was going to say. The assumption is that what we "see" is somehow some kind of objective representation of reality. Read Oliver Sacks' book "The Man Who Mistook His Wife for a Hat: And Other Clinical Tales" and you'll "see" things differently.

While I'm at it, there is the old quote:

Quote:

"If the brain were so simple we could understand it, we would be so simple we couldn't." -- Lyall Watson

Quote:

Not exactly true. A neuron can fire at a certain speed, and due to the ways that axons and dendrites connect, a dendrite can get a "stronger" signal from certain neurons compared to others.

A friend of mine is developing software at the Salk Institute that simulates the interaction of brain neurons and drugs. He's shown me some 3D animations of how neurons exchange different chemicals and it is vastly more than a binary process. It's really fascinating stuff.

@Control Group: You make two claims: one, that the brain is a binary system, and two, that therefore to deny strong AI is to deny that consciousness is a physical process.

I'll admit I'm no specialist on the brain, but I highly doubt that there is anything approaching a convincing argument that the brain is a binary system. In order to understand the mode of calculation (binary or something else) we need two things. First, we need detailed information about each physical state of the brain (neurons firing down a synapse, as you say). Next, we would need detailed information about each mental state that arises. Then we can compare the physical changes with changes in, let's say, "conscious thought". Otherwise, how can we know from where consciousness arises? We are nowhere close to getting detailed data on either of those things (we can measure "conscious thought" insofar as we can ask questions of another human being and then look at their brains, but this is very low resolution), so I wonder how we could figure out that the brain is a binary system.

Moreover, besides this theoretical problem, there is in fact a lot of evidence that the brain is not binary. This point has been made above. Even if someone were to make this radical claim, I imagine it would be, like most things about the mind, highly controversial.

Quote:

Been a long time since neurobiology, but if I remember correctly, the neuron does not simply fire or not fire. The strength of the signal is also transmitted via how many neurotransmitters are released. Not only are chemical neurotransmitters either released or not, but the amount released also passes information.

Well, I was trying to keep it simple. To an arbitrary degree of precision - i.e., to some precision beyond that which the synapse can carry or the recipient detect - the number of different discharge levels can be modeled in binary.

Quote:

I agree with Jane Q. Maybe it is because I am old enough to remember "2001: A Space Odyssey" when it came out. We have been promised "intelligent" computers for a long time and nothing like an "intelligent" computer exists. A machine that can beat a chess master but not recognize the difference between a shoe and a sock is hardly "intelligent".

And I assure you, I'm not claiming that we have intelligent computers now. I just don't think there's any reasonable "bright line" for intelligence - rather, like ants to birds to cats to dolphins to humans, there's a spectrum of intelligence which we keep pushing AI further along.

shaunld wrote:

So are you interpreting robotic intelligence based on the empirical inputs of experienced roboticists, or on your philosophical axiom that intelligence must not and can not possibly arise from something other than physical processes?

I'm basing it on having no evidence whatsoever that non-physical processes - AKA "magic" - exist. If one wants to posit some mechanism outside the realm of physics, then I think the burden of proof is on the claimant to demonstrate such a mechanism exists.

I'm having a problem understanding your use of "the cave" as an example of state estimation. "The cave" is about emotional states, fear and enlightenment, and robots have neither. You could say robotic sensing is like one part of Plato's allegory of "the cave," although that would imply that it is impossible for a robot to know what is really going on around it and that it would actively ignore information that contradicted the previous state estimate to preserve its ego.

Quote:

@Control Group: You make two claims: one, that the brain is a binary system, and two, that therefore to deny strong AI is to deny that consciousness is a physical process.

My claim that the brain is a physically realized system does not depend on any statements about it being binary, though I apologize for not making that clear. I argue that the brain is representable in binary, but that isn't a prerequisite for it being entirely a physical process.

The larger claim is that assuming consciousness is a phenomenon which arises entirely out of physical processes, then there is no fundamental bar to replicating consciousness "artificially."

Quote:

I'll admit I'm no specialist on the brain, but I highly doubt that there is anything approaching a convincing argument that the brain is a binary system. In order to understand the mode of calculation (binary or something else) we need two things.

Whether or not the brain is a binary system has little to do with understanding anything; rather, it is entirely dependent on the notion that the number of states the brain can occupy is finite. As I mentioned in a different reply, the number of states a neuron can occupy can be modeled to arbitrary precision in binary. If it has four charge levels, we need two bits. Eight, we need three, etc.
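(In general, a component with L distinguishable levels needs ceil(log2(L)) bits: four levels take two bits, eight take three, 1,024 take ten. The binary encoding grows only logarithmically with the number of levels, which is why finiteness is the only thing the argument needs.)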

Quote:

First, we need detailed information about each physical state of the brain (neurons firing down a synapse, as you say). Next, we would need detailed information about each mental state that arises. Then we can compare the physical changes with changes in, let's say, "conscious thought". Otherwise, how can we know from where consciousness arises? We are nowhere close to getting detailed data on either of those things (we can measure "conscious thought" insofar as we can ask questions of another human being and then look at their brains, but this is very low resolution), so I wonder how we could figure out that the brain is a binary system.

Don't conflate the ability to know the brain can be modeled in binary with the ability to actually model it. We're nowhere close to having - much less implementing - a model of the human brain. And if intelligence/consciousness is an emergent property of the complexity of the brain (that is, that something on the order of the human brain's complexity is a prerequisite for intelligence), then we're nowhere close to creating it.

But that is not the same thing as saying that there's a fundamental barrier which will prevent us ever from creating it; my claim is that no such barrier exists.

Quote:

Moreover, besides this theoretical problem, there is in fact a lot of evidence that the brain is not binary. This point has been made above. Even if someone were to make this radical claim, I imagine it would be, like most things about the mind, highly controversial.

If there is a lot of evidence to this effect, I would be interested to see it. My instinct is to think that any finite system can theoretically be modeled in binary to the limits physics allows measurement, but I certainly make no claims to being an expert in any related field. I always reserve the right to be wrong.

Good article, but with some misconceptions, as others have pointed out already, and the use of the word "think" in the title is not really justified.

I'd like to add one thing: someone mentioned something along the lines of "now we only need faster computers". The article implies this too when it says the brain also has feature detectors for object features (the zones in the visual cortex that light up for specific features). That is an oversimplification. No one today knows how the brain recognizes objects (meaning: creates distinct unified percepts). This is called "the binding problem". AND IT IS UNRESOLVED. It doesn't matter how fast the computer is that calculates the probabilities for the robot. No robot today performs object recognition the way the human brain does.

Side note: Also, somebody said that the strength of the signal of a neuron depends on the quantity of neurotransmitters. That's wrong. A spike of a neuron is all or nothing: either you hit threshold or not. The resulting spike is the basic unit of information. Whether you spike or not can depend on the quantity of neurotransmitters, but there is no "strength of the spike".