Robot 'Thinks' Like a Honey Bee

Roboticists already have developed a robot that runs like a cheetah and one that moves like an earthworm. Now engineers in the UK are trying to create artificial intelligence that will power a flying robot that can think like a honey bee.

The Green Brain project, based at the Universities of Sheffield and Sussex, is building the first computer models of a honey bee's brain, particularly the systems that control vision and sense of smell, according to researchers.

Using this artificial intelligence, scientists plan to develop a flying robot that acts autonomously, as a bee would, rather than being programmed to carry out specific movements and actions. The robot will be used in further research to help solve the tricky problem of creating a true artificial brain that can think for itself rather than perform tasks via programmed software, according to scientists.

While researchers have tried to do this by studying the brains of more complex organisms, such as rats and monkeys, the Green Brain project's scientists think focusing on simpler yet social creatures like honey bees could garner more success, according to the project's leader, Dr. James Marshall of the University of Sheffield. "The development of an artificial brain is one of the greatest challenges in artificial intelligence," Marshall said in a press release. "Because the honey bee brain is smaller and more accessible than any vertebrate brain, we hope to eventually be able to produce an accurate and complete model that we can test within a flying robot."

Once this artificial intelligence is developed, scientists expect it to accelerate advances in autonomous flying robots, said Dr. Thomas Nowotny, the leader of the University of Sussex team, in the press release. The project could also have future applications in brain modeling and computational neuroscience. Additionally, researchers will learn more about honey bees and perhaps glean information about the recent decline in their population.

The UK Engineering and Physical Sciences Research Council (EPSRC) funded the project, which is also using high-performance processors donated by NVIDIA. The processors -- used to generate 3D graphics on PCs and game consoles, as well as to power supercomputers -- will let the scientists forgo expensive supercomputers for their calculation-heavy models and instead run them on a cheaper, less cumbersome standard PC, researchers said.
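The article doesn't describe the project's actual models, but a minimal sketch can show the kind of array-heavy arithmetic those GPUs accelerate. The leaky integrate-and-fire model, neuron count, and constants below are illustrative assumptions, not details from the article; with CuPy installed, swapping the NumPy import for CuPy (which mirrors much of the NumPy API) runs the same arithmetic on an NVIDIA GPU.

```python
# A minimal sketch (not the Green Brain code) of a calculation-heavy
# neural model: one Euler step per iteration for a large population of
# leaky integrate-and-fire neurons. Replacing "import numpy as np" with
# "import cupy as np" moves the same array arithmetic onto the GPU.
import numpy as np

N = 100_000              # number of model neurons (illustrative)
DT = 1e-4                # time step, seconds
TAU = 0.02               # membrane time constant, seconds
V_THRESH, V_RESET = 1.0, 0.0

rng = np.random.default_rng(0)
v = rng.random(N)                    # membrane potentials
input_current = rng.random(N) * 2.0  # synthetic drive

for _ in range(1000):                # 100 ms of simulated time
    # Leaky integration: dv/dt = (-v + I) / tau
    v += DT * (-v + input_current) / TAU
    spiked = v >= V_THRESH           # boolean spike mask
    v[spiked] = V_RESET              # reset neurons that fired
```

Every iteration is the same arithmetic applied across 100,000 elements at once, which is exactly the data-parallel workload graphics processors are built for.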

Thanks for clarifying. I still think there are some unwarranted, anthropomorphizing assumptions in the comments here about how much independence a machine can actually have. OTOH, the lack of predictability is precisely why other researchers are working on not only autonomous robots, but two-way communications methods with same, as we covered here: http://www.designnews.com/author.asp?section_id=1386&doc_id=251721

I am a degreed engineer and a programmer. I have worked in Artificial Intelligence. I don't anthropomorphize. If my comments sound that way, it is because I don't have the language to express the real concern briefly. Put simply, a system that is not completely predictable is out of control. You cannot prove that a system that is out of control is safe. Neural networks and similar computing systems are very good at certain tasks, but beyond a certain complexity their behavior can no longer be predicted. The technical term is chaotic. The same mathematics applies to fluid flow, and it is why we can't predict the weather.
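For readers unfamiliar with the term, a minimal sketch can show what "chaotic" means in practice. It uses the logistic map, a textbook chaotic system substituted here for simplicity (it is not a neural network): two states that differ by one part in a trillion become completely uncorrelated within a few dozen iterations.

```python
# Sensitivity to initial conditions, the hallmark of chaos: iterate the
# logistic map x -> r*x*(1-x) with r = 4 (its chaotic regime) from two
# nearly identical starting points and watch the gap explode.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-12   # initial states differing by 1e-12
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |a - b| = {abs(a - b):.3e}")
# The gap roughly doubles each step; by about step 40 it is order 1,
# so no measurement precision buys more than a short prediction horizon.
```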

Now, imagine trying to deal with an industrial robot capable of throwing a car across the room that becomes unpredictable when the situation becomes unfamiliar to it. Don't get me wrong: it is not that the robot is likely to actually throw something. The danger is that something that powerful might do something completely unexpected.

Little pilot things like this project aren't dangerous. If they ever come out of simulation, it will be with little toy systems. They are a great study platform, and we will learn a lot from them. My concern is that we have a tendency to push these things into realms where they can cause problems before we learn to keep them safe.

It is not exactly the same thing, but the recent unintended acceleration incidents are an example of what I am talking about. The programs and designs were not proven safe, and people got hurt.

SparkyWatt, I wasn't referring to your comments, since you had just clarified them, but to those of others. I've noticed an anthropomorphizing tendency in comments on other blogs we've done about robot autonomy. But thanks for detailing more of what you're concerned about. I share those concerns, and so do many of our commenters.

SparkyWatt, would it be possible to address your (extremely valid) concerns by creating a conventional "box" around the AI system? Not sure if that is the correct term, but I'm thinking along the lines of the AI system providing the main control within limits set by a conventional system. In reality, this is equivalent to what is done today with a "real" intelligent system (i.e., a human controller). The human could get distracted and do something stupid, but there tends to be a conventional safety system to prevent that. Use the same idea with AI instead of a human.

@Jack Rupert: That could be done to a point. In systems where the safety issues are fairly simple, putting a "box" around the AI system could be very effective. For example an AI controlled industrial robot could still be stopped by a conventional safety curtain. However in more complex systems, there is no conventional system that could provide that protection. An example of what I am talking about would be an AI controlled car.
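A minimal sketch of this "box" idea, in the simple-safety-envelope case described above: an untrusted controller is wrapped by a conventional supervisor that clamps its commands and forces a stop when a safety-curtain input trips. Every name here (ai_policy, safety_box, the limits) is hypothetical, chosen for illustration.

```python
# An unpredictable controller wrapped in a conventional safety layer.
import random

SPEED_LIMIT = 0.5   # hard actuator limit, enforced outside the AI

def ai_policy(sensor_reading):
    # Stand-in for an unpredictable learned controller.
    return sensor_reading * random.uniform(-3.0, 3.0)

def safety_box(command, curtain_tripped):
    # Conventional, fully analyzable layer: no learning, no surprises.
    if curtain_tripped:
        return 0.0                                       # emergency stop
    return max(-SPEED_LIMIT, min(SPEED_LIMIT, command))  # clamp to limits

for t in range(5):
    raw = ai_policy(sensor_reading=1.0)
    safe = safety_box(raw, curtain_tripped=(t == 3))
    print(f"t={t}: raw={raw:+.2f} -> actuated={safe:+.2f}")
```

The point of the design is that the outer layer contains no learning and can be verified exhaustively. As noted above, it breaks down when, as with a driverless car, the safe region itself is too complex to specify conventionally.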

I think it would be far better to learn how to control "the way they think". This would be a significant extension of feedback/control theory that would ensure that they stayed on task and within safe boundaries. It is a much deeper learning curve for us to get there, but that way we could design systems that would not be inclined to try something stupid.

@SparkyWatt, I see what you mean with the autonomous car example, where the safety "box" wouldn't work. Is your idea to have a more specific and complex level of conventional control, or is it to figure out how the AI works at a much lower level and prevent "bad" decisions in that manner?
