Posted by samzenpus on Monday October 01, 2012 @05:14PM
from the what-could-possibly-go-wrong? dept.

An anonymous reader writes "British researchers at the Universities of Sussex and Sheffield are developing a computer model of a bee's brain that they hope can help scientists better understand the brains of more-complex animals, such as humans, and perhaps power artificial intelligence systems for bee-like robots. Called 'Green Brain,' the project is trying to advance the science of AI beyond systems that just follow a predetermined set of rules, and into an area where AI systems can actually act autonomously and respond to sensory signals."

...every good project has to start somewhere, and it will be interesting to see what this kind of AI modeling will accomplish. Perhaps we can learn more about bees, and how to keep them doing their busy work throughout our world without mass-murdering them. That being said... the day they crack the secret of modeling the human female's brain... that's where the real money will be made.

"British researchers at the Universities of Sussex and Sheffield are developing a computer model of a bee’s brain that they hope can help scientists better understand the brains of more-complex animals, such as humans, and perhaps power artificial intelligence systems for bee-like robots."

Perhaps bee-like robots. Or robots that function as bees do, where they perform mundane functions over and over for the good of society.

These don't have to be limited to just RoboBees. The algorithm could be used for more than just pollination. Think about it. Build anything of the appropriate size to autonomously go out and collect $RESOURCE, return with a load, refuel itself and go back out. Some cursory self-defense, like hazard evasion, would be nice. Throw in some networked communication to help with discovery of sources and you have a very efficient way to accumulate stuff.
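The collect/return/refuel loop described above can be sketched in a few lines. This is a toy illustration only; the site names, find-probabilities, and tank size are all invented, not taken from any real project.

```python
import random

# Invented sites and per-visit chance of finding $RESOURCE there.
SITES = {"riverbed": 0.3, "meadow": 0.6, "quarry": 0.1}

class Forager:
    """One autonomous unit: scout, collect, return to base when low on fuel."""

    def __init__(self, tank=10):
        self.tank = tank
        self.fuel = tank
        self.cargo = 0
        self.shared_map = {}  # networked discovery: sites known to the swarm

    def step(self, rng):
        """One tick; returns cargo dropped at base this tick (usually 0)."""
        if self.fuel <= 1:
            self.fuel = self.tank                # refuel at base
            dropped, self.cargo = self.cargo, 0  # unload the cargo bay
            return dropped
        site = rng.choice(sorted(SITES))
        if rng.random() < SITES[site]:           # found a unit of $RESOURCE
            self.shared_map[site] = True         # broadcast find to the swarm
            self.cargo += 1
        self.fuel -= 1
        return 0

rng = random.Random(42)
bot = Forager()
total = sum(bot.step(rng) for _ in range(200))  # resource delivered to base
```

Scaling this to a swarm is just a list of `Forager`s sharing one `shared_map`, which is exactly where the hijacking worry below comes in: whoever controls the shared map, or the definition of "base", controls the cargo.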

Logistically, if the swarm cannot manufacture new units, and/or collect and repair damaged or errant units, the system has serious vulnerabilities.

Take, for instance, the human greed factor.

If there is a huge swarm of autonomous robots out scouring riverbed sandbars for teensy gold nuggets, or some other discrete but scattered and valuable resource, how long do you think it would be before unscrupulous people tried to trick the bees into dropping the cargo off at a "new" drop-off point?

Where there is profit, there will always be dirty dealing and crime. Look at the internet, for instance, with something as seemingly harmless as email. Then along came the spammer.

Autocollecting robot swarms would be a smorgasbord for white-collar criminals.

"The algorithm could be used for more than just pollination. Think about it. Build anything of the appropriate size to autonomously go out and collect $RESOURCE, return with a load, refuel itself and go back out. Some cursory self-defense, like hazard evasion, would be nice. Throw in some networked communication to help with discovery of sources and you have a very efficient way to accumulate stuff."

If they're going to include these behaviors, they ought to model Weaver ants instead.

That may possibly be the approach many of these very smart researchers use. Perhaps the focus should be on developing some kind of artificial nervous system with the ability to learn on its own, rather than trying to program for the dynamics of real-world interaction. Perhaps the folks over at Boston Dynamics [bostondynamics.com] may be on to something? Not sure what its learning/memory capabilities are, but it sure seems to behave like it has some kind of nervous system.

It looks like they are actually trying to simulate a bee's brain at the neuron level (the article is light on details). This is the latest trend in strong AI: simulate a brain at the neuron level. There are a lot of problems and difficulties involved, and inevitably the emulations are only crude approximations of real neurons that the researchers hope are good enough. But if it works, the brain should have some learning capability.
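For a sense of what "simulate at the neuron level" usually means in practice: the workhorse simplification in large-scale models is the leaky integrate-and-fire neuron, which replaces all the biochemistry of a cell with one equation for its membrane voltage. A minimal sketch (the parameters here are arbitrary, not from the Green Brain project):

```python
def lif_step(v, i_in, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The voltage v decays toward v_rest and is pushed up by input
    current i_in; crossing v_thresh emits a spike and resets.
    Returns (new_voltage, spiked).
    """
    v = v + (-(v - v_rest) + i_in) * dt / tau
    if v >= v_thresh:
        return v_rest, True   # fire and reset
    return v, False

# Drive one neuron with a constant suprathreshold input and count spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, fired = lif_step(v, i_in=1.5)
    spikes += fired
```

A whole-brain model is then on the order of a million of these updates per timestep wired together by synaptic weights, which is why the crudeness of the per-neuron approximation matters so much.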

"Apparently brain-emulation technology is to the point where emulating an entire bee brain in real time is feasible."

It's not. They are emulating a simplification of a bee's brain. Everyone else doing brain modeling is simplifying things as well. Do the simplifications make a difference? That's a question no one actually knows the answer to.

Why go through the trouble of building an actual physical bee, when there are awesome 3D world and physics models that you could drop the bee brain into, and it would have no idea the world was simulated? Seems like that would bee a lot easier to debug. *cringe*

Natural system software incorporates rules that have built-in support for uncertainty.

Ever wondered why it takes so goddamn much processing power to fold a model of a protein? (They don't call Brownian motion a "random walk" for no reason, you know.)

The nervous system of that bee is fundamentally influenced by biochemical interactions at thousands of locations, each incorporating a degree of randomness into the system. Instead of treating the randomness as noise, the design utilizes the randomness as an asset.
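One concrete way randomness becomes an asset rather than noise is stochastic resonance: a signal too weak to cross a neuron's firing threshold on its own becomes detectable once noise is added, because the noise occasionally lifts it over the line. A toy demonstration (the threshold and magnitudes are made up for illustration):

```python
import random

def firing_rate(signal, noise_sd, rng, thresh=1.0, trials=1000):
    """Fraction of trials in which signal plus Gaussian noise
    crosses the firing threshold."""
    return sum(signal + rng.gauss(0.0, noise_sd) >= thresh
               for _ in range(trials)) / trials

rng = random.Random(0)
silent = firing_rate(0.8, 0.0, rng)  # subthreshold signal, no noise: never fires
noisy  = firing_rate(0.8, 0.2, rng)  # same signal, with noise: fires sometimes
```

The noiseless neuron is completely blind to the 0.8 signal, while the noisy one fires at a rate that tracks the signal's strength; a population of such noisy units can encode stimuli that a deterministic threshold would discard entirely.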