@SparkyWatt, I see what you mean with the autonomous car example where the safety "box" wouldn't work. Is your idea to have a more specific / complex level of conventional control, or is it to attempt to figure out how AI works at a much lower level and prevent "bad" decisions in that manner?

@Jack Rupert: That could be done to a point. In systems where the safety issues are fairly simple, putting a "box" around the AI system could be very effective. For example, an AI-controlled industrial robot could still be stopped by a conventional safety curtain. However, in more complex systems, there is no conventional system that could provide that protection. An example of what I am talking about would be an AI-controlled car.

I think it would be far better to learn how to control "the way they think". This would be a significant extension of feedback/control theory that would ensure they stayed on task and within safe boundaries. It is a much steeper learning curve for us to get there, but that way we could design systems that would not be inclined to try something stupid.

SparkyWatt, would it be possible to address your (extremely valid) concerns by creating a conventional "box" around the AI system? Not sure if that is the correct term, but I'm thinking along the lines of the AI system providing the main control within limits set by a conventional system. In reality, this is equivalent to what is done today with a "real" intelligent system (i.e., a human controller). The human could get distracted and do something stupid, but there tends to be a conventional safety system to prevent that. Use the same idea with AI in place of the human.
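
To make the "box" idea concrete, here is a minimal sketch of the pattern being described: an opaque AI controller proposes commands, and a simple conventional limiter enforces hard bounds no matter what the AI asks for. All the names here (ai_throttle, SafetyEnvelope) are illustrative stand-ins, not any real system's API.

```python
def ai_throttle(sensor_reading):
    """Stand-in for an opaque AI controller; by assumption it may return anything."""
    return sensor_reading * 2.5


class SafetyEnvelope:
    """Conventional 'box': simple, provable logic wrapped around the AI output."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def command(self, ai_output):
        # Clamp whatever the AI requested into the safe operating range.
        return max(self.lo, min(self.hi, ai_output))


envelope = SafetyEnvelope(lo=0.0, hi=1.0)
print(envelope.command(ai_throttle(10.0)))  # AI asked for 25.0; envelope allows 1.0
```

The point of the pattern is that the envelope is small enough to verify exhaustively, even though the AI inside it is not.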

SparkyWatt, I wasn't referring to your comments, since you had just clarified them, but to those of others. I've noticed an anthropomorphizing tendency in comments on other blogs we've done about robot autonomy. But thanks for detailing more of what you're concerned about. I share those concerns, and so do many of our commenters.

I am a degreed engineer and a programmer, and I have worked in artificial intelligence. I don't anthropomorphize; if my comments sound that way, it is because I don't have the language to express the real concern briefly. Put simply, a system that is not completely predictable is out of control, and you cannot prove that a system that is out of control is safe. Neural networks and similar computing systems are very good at certain tasks, but beyond a certain complexity their behavior can no longer be predicted. The technical term is chaotic. The same mathematics applies to fluid flow, and it is why we can't predict the weather.
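
As an illustration of "chaotic" in that technical sense (my example, not from the discussion above): the logistic map x' = 4x(1-x) is about the simplest system that shows it. Two runs whose starting points differ by one part in a million quickly end up on completely different trajectories, so long-range prediction is hopeless even though the rule itself is trivial.

```python
def trajectory(x0, steps=50, r=4.0):
    """Iterate the logistic map x' = r * x * (1 - x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs


a = trajectory(0.300000)
b = trajectory(0.300001)  # initial difference: one part in a million

# The tiny initial gap grows until the two runs bear no resemblance.
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest divergence over 50 steps: {gap:.3f}")
```

A neural network is vastly more complicated than this one-line map, which is exactly why the same sensitivity is a worry there.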

Now imagine dealing with an industrial robot capable of throwing a car across the room that becomes unpredictable when the situation is unfamiliar to it. Don't get me wrong: it is not that the robot is likely to actually throw something. The danger is simply that something that powerful might do something completely unexpected.

Little pilot things like this project aren't dangerous. If they ever come out of simulation, it will be with little toy systems. They are a great study platform, and we will learn a lot from them. My concern is that we have a tendency to push these things into realms where they can cause problems before we learn to keep them safe.

It is not exactly the same thing, but the recent uncontrolled-acceleration incidents are an example of what I am talking about. The programs and designs were not proven safe, and people got hurt.

Thanks for clarifying. I still think there are some unwarranted, anthropomorphizing assumptions in the comments here about how much independence a machine can actually have. OTOH, the lack of predictability is precisely why other researchers are working on not only autonomous robots, but two-way communications methods with same, as we covered here: http://www.designnews.com/author.asp?section_id=1386&doc_id=251721

When I said that it would have its own desires, I didn't mean that it would necessarily have a sense of self and be acting in its own interests. I meant only that its choices may not be fully predictable. It may therefore be unreliable, or worse, make choices that are dangerous to us. We have to remember that it may not do that out of a sense of self or in its own self-interest; it may simply be its unawareness that makes it dangerous. After all, most injuries caused by machines are not caused because the machine "tries" to hurt us; they happen because the machine is "unaware". Brain-like systems will be more flexible, but they add the element of unpredictability on top of that.

I agree that bees are a great model for AI. But I find it odd that some are assuming an artificial brain could have desires. First, there has to be somebody home, and there's no evidence that an artificial intelligence or an artificial brain (not the same thing) would have enough of a sense of self and individuality to have desires. Of course, this has been the subject of much debate in philosophy over the centuries, as well as more recently in robotics and sci-fi.

Brains are a lot more complex than we think, but a tiny one like a honey bee's might actually be tractable for analysis. Remember, the human brain has around 100 billion neurons, with something like 10,000 interconnections per neuron. Something around one millionth that size might actually be technically doable.
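
A quick back-of-envelope check of those figures (using the rough numbers quoted above, not measured values):

```python
human_neurons = 100e9                    # ~1e11 neurons, the figure quoted above
connections_per_neuron = 10e3            # ~1e4 interconnections per neuron
human_connections = human_neurons * connections_per_neuron  # ~1e15 total

model_neurons = human_neurons / 1e6      # one millionth: ~100,000 neurons
model_connections = model_neurons * connections_per_neuron  # ~1e9 connections

print(f"model: {model_neurons:.0e} neurons, {model_connections:.0e} connections")
```

So "one millionth" works out to roughly a hundred thousand neurons and a billion connections, which is in the same ballpark as the commonly quoted figure of just under a million neurons for an actual honey bee brain.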

On the other hand, the neurons are only the high-speed processing capability of the brain. Hormones and neurotransmitter availability have a "bank switching" function and a training function that strongly affect the way the neurons think; they just operate at a slower speed (over seconds instead of milliseconds). To say that a brain can be modeled without that added complexity is kidding ourselves. And we know very little of that chemically based functionality at this time. Really, we only know that it is there; we have little or no idea how it works or what it does in an algorithmic sense.

The thing that concerns me is that an artificial brain will have its own ideas and desires. Before we can trust them, we not only have to learn how to duplicate them, we have to learn how to control them. By which I mean we have to learn how to keep them from making mistakes (we are very good at making mistakes) and how to make sure they are acting in our interests instead of competing with us.
