Robot 'Thinks' Like a Honey Bee

Roboticists already have developed a robot that runs like a cheetah and one that moves like an earthworm. Now engineers in the UK are trying to create artificial intelligence that will power a flying robot that can think like a honey bee.

The Green Brain project, a collaboration between the Universities of Sheffield and Sussex, is developing the first computer models of a honey bee's brain, particularly the systems that control vision and sense of smell, according to researchers.

Using this artificial intelligence, scientists plan to develop a flying robot that can act autonomously as a bee would, rather than be programmed to carry out movements and actions. The robot will be used in further research to help solve the tricky problem of creating a true artificial brain that can actually think for itself, rather than perform tasks via programmed software, according to scientists.

While researchers have tried to do this by studying the brains of more complex organisms, such as rats and monkeys, the Green Brain project's scientists think focusing on simpler yet social creatures like honey bees could garner more success, according to the project's leader, Dr. James Marshall of the University of Sheffield. "The development of an artificial brain is one of the greatest challenges in artificial intelligence,” Marshall said in a press release. “Because the honey bee brain is smaller and more accessible than any vertebrate brain, we hope to eventually be able to produce an accurate and complete model that we can test within a flying robot.”

Once this artificial intelligence is developed, scientists expect it to accelerate the development of autonomous flying robots, said Dr. Thomas Nowotny, the leader of the University of Sussex team, in the press release. The project also could have future applications in brain modeling and computational neuroscience. Additionally, researchers will learn more about honey bees and perhaps glean information about the recent decline in their population.

The UK Engineering and Physical Sciences Research Council (EPSRC) funded the project, which also is using high-performance processors donated by NVIDIA. The processors -- used to generate 3D graphics on PCs and game consoles, as well as to power supercomputers -- will allow scientists to run their calculation-heavy models on a less expensive and less cumbersome standard PC rather than an expensive supercomputer, researchers said.

@SparkyWatt, I see what you mean with the autonomous car example where the safety "box" wouldn't work. Is your idea to have a more specific / complex level of conventional control, or is it to attempt to figure out how AI works at a much lower level and prevent "bad" decisions in that manner?

@Jack Rupert: That could be done to a point. In systems where the safety issues are fairly simple, putting a "box" around the AI system could be very effective. For example, an AI-controlled industrial robot could still be stopped by a conventional safety curtain. However, in more complex systems, there is no conventional system that could provide that protection. An example of what I am talking about would be an AI-controlled car.

I think it would be far better to learn how to control "the way they think". This would be a significant extension of feedback/control theory that would ensure that they stayed on task and within safe boundaries. It is a much steeper learning curve for us to get there, but that way we could design systems that would not be inclined to try something stupid.

SparkyWatt, would it be possible to address your (extremely valid) concerns by creating a conventional "box" around the AI system? Not sure if that is the correct term, but I'm thinking along the lines of the AI system providing the main control within limits set by a conventional system. In reality, this is equivalent to what is done today with a "real" intelligent system (i.e., a human controller). The human could get distracted and do something stupid, but there tends to be a conventional safety system to prevent that. Use the same idea with AI instead of the human.
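For what it's worth, here is a minimal sketch of what that kind of conventional "box" might look like in code. Everything in it is hypothetical and purely illustrative (the AI controller, the limits, the obstacle flag are all made up, not anything from the Green Brain project): the AI proposes a command, and a simple, fully predictable rule layer clamps it to fixed limits.

```python
# Illustrative sketch: a conventional safety "box" wrapped around an AI controller.
# The AI proposes a command; a simple, predictable rule layer enforces hard limits.
from dataclasses import dataclass

@dataclass
class Command:
    throttle: float   # 0.0 .. 1.0
    steering: float   # radians, negative = left

class SafetyBox:
    """Conventional limiter: clamps whatever the AI asks for to fixed bounds."""
    MAX_THROTTLE = 0.6
    MAX_STEERING = 0.3

    def filter(self, cmd: Command, obstacle_near: bool) -> Command:
        throttle = min(max(cmd.throttle, 0.0), self.MAX_THROTTLE)
        steering = min(max(cmd.steering, -self.MAX_STEERING), self.MAX_STEERING)
        if obstacle_near:          # hard rule, independent of the AI's reasoning
            throttle = 0.0
        return Command(throttle, steering)

def ai_controller(sensors) -> Command:
    # Stand-in for the unpredictable learned controller.
    return Command(throttle=0.9, steering=-0.8)

box = SafetyBox()
safe_cmd = box.filter(ai_controller(sensors=None), obstacle_near=True)
print(safe_cmd)   # Command(throttle=0.0, steering=-0.3)
```

As the thread notes, this only works where the safe envelope can be written as simple fixed rules (the safety-curtain case); it breaks down where deciding what is "safe" itself requires judgment, as in the car example.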

SparkyWatt, I wasn't referring to your comments, since you had just clarified them, but to those of others. I've noticed an anthropomorphizing tendency in comments on other blogs we've done about robot autonomy. But thanks for detailing more of what you're concerned about. I share those concerns, and so do many of our commenters.

I am a degreed engineer and a programmer. I have worked in Artificial Intelligence. I don't anthropomorphize. If my comments sound that way, it is because I don't have the language to express the real concern briefly. Put simply, a system that is not completely predictable is out of control. You cannot prove that a system that is out of control is safe. Neural networks and similar computing systems are very good at certain tasks, but beyond a certain level of complexity, their behavior can no longer be predicted. The technical term is chaotic. The same term applies to fluid flow, and it is why we can't predict the weather.
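As a quick illustration of what "chaotic" means in that sense, here is a standard toy example (the logistic map, not anything specific to neural networks or this project): two starting states that differ by one part in a billion end up bearing no resemblance to each other after a few dozen iterations.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). In the chaotic regime (r = 3.9),
# a difference of 1e-9 in the starting state keeps growing until the two
# trajectories have nothing to do with each other -- long-term prediction fails.
r = 3.9
x_a, x_b = 0.500000000, 0.500000001

for step in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 20 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f}")
# By roughly step 40-60 the two trajectories have fully diverged.
```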

Now, imagine trying to deal with an industrial robot capable of throwing a car across the room that becomes unpredictable when the situation becomes unfamiliar to it. Don't get me wrong: it is not that the robot is likely to actually throw something. The danger is simply that something that powerful might do something completely unexpected.

Little pilot things like this project aren't dangerous. If they ever come out of simulation, it will be with little toy systems. They are a great study platform, and we will learn a lot from them. My concern is that we have a tendency to push these things into realms where they can cause problems before we learn to keep them safe.

It is not exactly the same thing, but the recent uncontrolled-acceleration incidents are an example of what I am talking about. The programs and designs were not proven safe, and people got hurt.

Thanks for clarifying. I still think there are some unwarranted, anthropomorphizing assumptions in the comments here about how much independence a machine can actually have. OTOH, the lack of predictability is precisely why other researchers are working on not only autonomous robots, but two-way communications methods with same, as we covered here: http://www.designnews.com/author.asp?section_id=1386&doc_id=251721

When I said that it would have its own desires, I didn't mean that it would necessarily have a sense of self and be acting in its own interests. I meant only that its choices may not be fully predictable. It therefore may be unreliable, or worse, make choices that are dangerous to us. We have to remember that it may not do that out of a sense of self or in its own self-interest. It may simply be its unawareness that makes it dangerous. After all, most injuries caused by machines happen not because the machine "tries" to hurt us or because the machine is trying to protect us, but because the machine is "unaware". Brain-like systems will be more flexible, but they add the element of unpredictability to that.

I agree that bees are a great model for AI. But I find it odd that some are assuming an artificial brain could have desires. First, there has to be somebody home, and there's no evidence that an artificial intelligence or an artificial brain (not the same thing) would have enough of a sense of self and individuality to have desires. Of course, this has been the subject of much debate in philosophy over the centuries, as well as more recently in robotics and sci-fi.

Brains are a lot more complex than we think, and a tiny one like a honey bee's might actually be tractable for analysis. Remember, the human brain has around 100 billion neurons, with something like 10,000 interconnections per neuron. Something around one millionth that size might actually be technically doable.
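For a rough sense of that scale difference, here is the back-of-envelope arithmetic. The neuron and connection counts are the figures quoted in the comment above; the per-connection storage cost is an assumption added purely for illustration.

```python
# Back-of-envelope: human brain vs. a brain one millionth the size.
human_neurons = 100e9            # ~100 billion neurons (figure from the comment)
synapses_per_neuron = 10_000     # ~10,000 interconnections per neuron
human_synapses = human_neurons * synapses_per_neuron   # ~1e15 connections

small_neurons = human_neurons / 1e6                    # ~100,000 neurons
small_synapses = small_neurons * synapses_per_neuron   # ~1e9 connections

bytes_per_weight = 4             # assumption: one 32-bit weight per connection
print(f"Human-scale connections: {human_synapses:.0e}")
print(f"Millionth-scale connections: {small_synapses:.0e} "
      f"(~{small_synapses * bytes_per_weight / 1e9:.0f} GB of weights)")
```

A billion or so connection weights is the kind of volume that plausibly fits on a single well-equipped PC, which gives some sense of why the GPU hardware mentioned in the article matters.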

On the other hand, the neurons are only the high-speed processing capability of the brain. Hormones and neurotransmitter availability have a "bank switching" function and a training function that strongly affect the way the neurons think. They just operate at a slower speed (over a period of seconds instead of milliseconds). To say that a brain can be modeled without that added complexity is kidding ourselves. And we know so little of that chemically based functionality at this time. Really, we only know that it is there. We have little or no idea how it works or what it does, in an algorithmic sense.
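One way to picture that two-timescale idea, as a purely hypothetical sketch (not how the Green Brain models or any real brain works): fast "neural" updates every millisecond-scale step, with a slow modulator variable that changes only on a seconds-scale and rescales the neurons' gain, effectively switching which responses dominate.

```python
import math, random

# Hypothetical two-timescale sketch: fast "neural" updates modulated by a slow
# "hormone" level. The fast loop runs every step (~milliseconds); the slow
# modulator changes only every 1000 steps (~seconds) and rescales neuron gain.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(8)]
modulator = 0.2                      # slow chemical state, in [0, 1]

def neuron_output(inputs, gain):
    s = sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(gain * s)       # fast, gain-scaled response

for step in range(3000):
    if step % 1000 == 0:             # slow timescale: modulator drifts upward
        modulator = min(1.0, modulator + 0.4)
    inputs = [math.sin(step / 50.0)] * 8
    out = neuron_output(inputs, gain=0.5 + modulator)
    if step % 1000 == 999:
        print(f"step {step}: modulator={modulator:.1f}, output={out:+.3f}")
```

The same input pattern produces different output strengths as the slow variable shifts, which is roughly the "bank switching" effect the comment describes, though the real chemistry is, as noted, largely unknown.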

The thing that concerns me is that an artificial brain will have its own ideas and desires. Before we can trust them, we not only have to learn how to duplicate them, we have to learn how to control them. By which I mean we have to learn how to keep them from making mistakes (we are very good at making mistakes) and how to make sure they are acting in our interests instead of competing with us.
