Posted
by
kdawson
on Sunday August 08, 2010 @04:14AM
from the but-not-as-we-know-it dept.

Calopteryx notes a New Scientist piece on how digital organisms in a computer world called Avida replicate, mutate, and have evolved a rudimentary form of memory. Another example of evolution in a simulation lab is provided by reader Csiko: "An evolutionary algorithm was used to derive a control strategy for simulated robot soccer players. The results are interesting — after a few hundred generations, the robots learn to defend, pass, and score — amazing considering that there was no trainer in the system; the self-organizing differentiated behavior of the players emerged solely out of the evolutionary process."

Not really, it's merely selecting patterns it is not aware of if it's patterns are "successful" or not. If you run a pattern generator long enough you can get all possible patterns within a finite possibility space.

I don't get why this has been modded "funny". It's true. Just like monkeys tapping away at keyboards in order to generate the works of Shakespeare, a computer can generate player algorithm patterns that work well in this particular setting. The speed is just boosted by selectively choosing the ones that match whatever it is you want to get at the end.

The fun thing is that these robots truly have a one-track mind. They do not learn -at all- within one generation, even if they have a brain that is relatively similar to ours. The brain is configured -entirely- at "birth" by the natural selection algorithm.

And yet they display a few remarkably human traits that seem to -but don't- indicate learning. Memory. Strategy. Adapting that strategy to the "enemy". Yet by most standards they don't think during the game. This makes one wonder... is the fact that humans have memory, adapt "somewhat", and devise strategies really an indication of the level of thought we credit humans with?

Makes one wonder just how one-track the human mind is. Everyone likes to accuse everyone else of "not seeing the truth" about very nontrivial problems. Are people really "seeing the truth" or just repeating what they were programmed to think?

The history of science definitely seems to support the "programmed" argument. Other histories... even more so. We are mindless automatons; we just like to think we aren't.

Depends on what level of perspective you want to look at. If you look at simple tasks and abilities, yes, a human will learn and think (some more than others) over the course of his life. It is evident if you take, for example, twins that grow up in different environments: they end up with different abilities and different understandings of the world.

OTOH if you widen your view and look at how humans interact with each other (i.e. society), how they think (technology, culture), and other things like that, they don't really learn anything during their lives. That's where evolution kicks in: people born in different generations have different ways of interacting and thinking. Some are behind their times while others are ahead, which I see as a normal mutation, if you will, that can be a successful one or a failing one. But even revolutionary people become conservatives after a certain age. That's why people die; that's how society evolves.

Yes, it's not all black and white like I made it sound; some things in the first category are innate and some in the second can still be modified by experience, but I think my point was properly made.

Speaking of twins, there are actually many documented cases where twins were separated and lived and grew up in completely different environments, but ended up having the same traits, such as habits, ways of thinking, etc.

Interesting point. Perhaps the 'direction' comes from the sociological benefits and/or penalties for a given physical or mental trait, which could be driven internally by the system in a feedback loop.

Let's take the most basic and easily understood example today - having a child.

Does society fiscally reward or fiscally punish having a child? Both, actually, depending on where you are on the socioeconomic scale. If you are a young and extremely poor single woman, society fiscally rewards having a child. A w

I think this is a place where people get distracted when it comes to evolution. Evolution IS directed. It's directed by Natural Selection. The direction it moves in is dictated by the environment and the species' fitness to it. Because environments can change, directions also change.

This is a very important distinction because some Intelligent Design proponents (including their top "actual" biologist Michael Behe) believe in Evolution, but not by Natural Selection. They believe in Evolution by Intelligent

I don't get why this has been modded "funny". It's true. Just like monkeys tapping away at keyboards in order to generate the works of Shakespeare, a computer can generate player algorithm patterns that work well in this particular setting. The speed is just boosted by selectively choosing the ones that match whatever it is you want to get at the end.

And this teeny little boost is the difference between getting what you want before or after the monkeys and the computer disappear from proton decay [wikipedia.org].

Not really, it's merely selecting
patterns it is not aware of if
it's patterns are "successful"
or not.

The author ascribes awareness to the selection process.
The grammatical ambiguity around the uses of "it's" is something some people may also consider funny--change the first "it's" to "its" for comical effect.

If you run a pattern generator
long enough you can get all possible
patterns within a finite possibility space.

Now that's just plain LOL. Even if we assume that the pattern space is finite, which is not clear at all given that we're dealing here with velocities, possibly even classical chaos, the dimension of the space must be humongous - hence evolutionary algorithms.

The problem space is vast once you get into the details humans take for granted: so vast that it makes secure passwords look simplistic - this is far beyond brute-forcing AES encryption. Even a simplified problem space is usually huge in terms of possible combinations; the only advantage AI work has is that there is no single solution, but rather a large fuzzy set of solutions that are reasonably acceptable.

Say a monkey typed 99% of Shakespeare, wrong only in the remaining 1%: since the next attempt is random, the monkey would likely be back to 0% Shakespeare! There would be no convergence towards the answer. Even brute-forcing encryption rules out past attempts to avoid repeating itself, but a random search does not. Furthermore, if the problem space is random - so that a 99% Shakespeare is light years away from the 100% Shakespeare - then no convergence process (i.e. evolution) is going to converge, which effectively puts you in the same situation as a random search.
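Dawkins' famous "weasel" program makes exactly this distinction concrete: pure random typing never converges, but cumulative selection - keep the best mutant each round - homes in on the target in a few hundred generations. A minimal Python sketch along those lines (the phrase and parameters are just the classic illustration, not anything from TFA):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Number of positions already matching the target phrase.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in s)

def evolve(pop_size=100, seed=1):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Cumulative selection: keep the best of the parent's mutated copies
        # (including the parent itself, so fitness never decreases).
        offspring = [mutate(parent, rng) for _ in range(pop_size)] + [parent]
        parent = max(offspring, key=score)
        generations += 1
    return generations
```

The point isn't that evolution has a target phrase - it doesn't - but that retaining partial successes turns an astronomically improbable random search into a quick hill climb.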

The monkey typing thing is a silly way to state the obvious and sound good while doing so. "It's POSSIBLE but impractically time consuming" doesn't sound as good. These AI problems are nothing like monkeys typing - they learn and progress towards competency, which is totally different! Again, they do this quite quickly, since anything near the monkey approach wouldn't get there in our lifetimes (winning the lotto is more likely).

Just because it is mindbogglingly complex does not mean it is intelligent... or that it has something we'd normally think of as a "memory" either. It's possible our brains are just pattern matching machines - and since we can only understand the most simple of such things, we'll never figure it out (but we could build a brain which could figure it out eventually, and perhaps our brains are just an extremely fuzzy non-linear pattern match for #42).

In evolution what is important is selection, as long as there is selection (based on fitness) and variability the system will adapt to the environment (the things that shape fitness). So there is a trainer, it is called selection.

"In evolution what is important is selection, as long as there is selection (based on fitness) and variability the system will adapt to the environment (the things that shape fitness). So there is a trainer, it is called selection."

Not exactly. Unless variability is driven, selection and variability *may* press the system to fit the environment. But there's no guarantee: the system may just as well be destroyed.

That's extraordinarily unlikely. Granted, if you're only looking at a single individual, mutations/breeding may cause catastrophic changes. But on a population-wide basis, sudden overall declines in 'best individual' fitness are pretty much impossible.

On a population-wide basis, system breakage is not only likely but it is the norm: it's called species extinction.

That's really because of a change in the environment and therefore a change in the fitness of the organism (or population of organisms) for that environment.

If a nuclear war broke out tomorrow, the human race would no longer be fit for the environment. Instead, the cockroach (or cockroach-like features) would be significantly more fit and therefore selected for.

You do have to be a bit careful, though--- sometimes there is a hidden trainer in the system. In evolutionary algorithms, there are often a lot of parameters and data structures to tweak at the beginning, e.g., what kinds of crossover and mutation operators do you have, and what's your bit-string encoding? There are a whole lot of ways to slip in human domain knowledge of which things are important into the up-front engineering.
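As a hypothetical illustration of how domain knowledge hides in the operators: suppose one parameter of an evolved controller is a continuous gain. Mutating its raw bit encoding treats sign and exponent bits like any others, while a Gaussian perturbation encodes the engineer's prior knowledge that nearby values behave similarly - a choice made before evolution ever runs. Both operators below are made up for illustration:

```python
import random
import struct

rng = random.Random(0)

def mutate_raw_bits(x):
    # "Knowledge-free" operator: flip one random bit of the IEEE-754 encoding.
    # A single flip in the sign or exponent can change the value wildly.
    bits = bytearray(struct.pack("d", x))
    i = rng.randrange(len(bits) * 8)
    bits[i // 8] ^= 1 << (i % 8)
    return struct.unpack("d", bytes(bits))[0]

def mutate_gaussian(x, sigma=0.1):
    # "Knowledgeable" operator: a small smooth perturbation, because the
    # engineer knows the parameter is a continuous gain.
    return x + rng.gauss(0.0, sigma)
```

Both are legitimate mutation operators; preferring the second is exactly the kind of up-front engineering the parent describes.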

Statistical ML is one of my areas of research, so I'm fairly familiar with the basics. ;-)

Generally, mashing two individuals together is only better than hill-climbing if you have a useful bit-string encoding and a crossover operator that results in the mashing operating on nice units. The vast majority of published successful GA results I've seen have quite heroically engineered encodings that include a lot of human domain knowledge. If you just take some random data structure and serialize it to bits directl

Saying there wasn't a trainer in the system is a bit of a misunderstanding, really.

Evolutionary algorithms always make use of a fitness function to define which individuals survive and evolve and which die off; this is the case in the presented setup as well. Without knowing the project, I'd guess they let the "teams" play against each other and let the winners survive.

If there wasn't a fitness function, it wouldn't really be an evolutionary algorithm; evolution sorta implies "survival of the fittest" and all that, you know :) The interesting part is observing the emergent behavior - in other words, what we were not expecting to get out of the system. When the system doesn't have any knowledge of what a "defender" is, or what "passing the ball" means, it's interesting to see these well-known patterns evolve even though they are not specified. This is what matters to the AI researcher.
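Without knowing the paper's actual setup, a generic sketch of that kind of tournament-style fitness - no explicit target behavior, just "winners survive" - might look like the following. The "skill genome" and the match model here are placeholders standing in for a real soccer simulation:

```python
import random

rng = random.Random(42)

def play_match(team_a, team_b):
    # Placeholder match: a team's total "skill" decides the winner
    # stochastically. A real system would run an actual game simulation.
    p_a = sum(team_a) / (sum(team_a) + sum(team_b) + 1e-9)
    return team_a if rng.random() < p_a else team_b

def next_generation(teams, mutation=0.1):
    # Pair teams off, let match winners survive, and refill the population
    # with mutated copies of the winners. No target behavior is specified.
    rng.shuffle(teams)
    winners = [play_match(teams[i], teams[i + 1])
               for i in range(0, len(teams) - 1, 2)]
    children = [[max(0.0, g + rng.uniform(-mutation, mutation)) for g in w]
                for w in winners]
    return winners + children

# 16 teams, each a genome of 4 nonnegative "skill" genes.
teams = [[rng.random() for _ in range(4)] for _ in range(16)]
for _ in range(50):
    teams = next_generation(teams)
```

The fitness pressure is entirely implicit in who wins matches, which is why whatever behavior emerges (defending, passing) was never spelled out anywhere.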

Other implementations of evolutionary algorithms may be fun (http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/) but do not show emergent behavior, because you are asking for a specific output through the fitness function. That is the main difference.

Well... if you read their PDF (linked at the bottom of the blog post), you see that they literally turn the fitness function into a trainer that leads teams to the most proper ways to play (see p. 5, Section "4.5 Fitness evaluation").

The first step in the learning process is that the teams should spread out on the field and be relatively evenly distributed.

Not really, it's merely selecting patterns it is not aware of if it's patterns are "successful" or not. If you run a pattern generator long enough you can get all possible patterns within a finite possibility space.

You've missed the point, of course: not all information (or patterns) needs to be generated; only a finite subset ever needs to be found to be useful. To put it another way, you can make a million variations of a fork, and each is still useful as a fork.

If you read the pdf that the blog post links to ("Evolving neural network controllers for a team of self-organizing robots"), on Page 5 in section "4.5 Fitness Function"... they discuss how they decompose fitness down from simply "Score the most goals" to little component tasks that "ensure a smooth learning process assuming some preliminary knowledge or ideas about the solution."

They practically lead the algorithms to better solutions along certain pat

Someone mod parent up. This reminds me of the automated mathematician: if it's given rules that encourage discovering the Goldbach conjecture, and you spend enough time tuning it, then it's no surprise that it will eventually discover the Goldbach conjecture.
Some debate whether AM actually discovered anything, or just found the stuff it was designed to discover (seeing as it stopped finding interesting conjectures after rediscovering all the known ones). But that's getting into philosophy.

You don't have to go into philosophy to get these. Theoretical mathematics will help you out here.

Suppose you had a "perfect" learner - one that tries every theoretically possible analytical technique. And then it manages to surprise you: it discovers existing mathematics, and perhaps a bit more, but nothing truly remarkable. That would simply be the result of a mathematical property of the "mathematical space" (the set of all possible mathematical knowledge, of, say, all Gödel sentences): that would simply

The subset of correct mathematical theory cannot be the empty set, because if it were empty, set theory (which clearly is part of mathematics) would be flawed, and therefore there wouldn't be a well defined notion of empty set, making the statement "the subset of correct mathematical theory is the empty set" meaningless. On the other hand, logic also is part of mathematics, and therefore my argument may not hold in that case.

Life in general is not much different. The environment/nature/the universe sets rules that encourage the creation of lifeforms, encourages them to replicate and improve their chances of survival. It's no surprise that life evolves and creatures develop memories, intelligence, etc. The whole system is set up in a way that it is bound to happen. Whether evolution in nature or evolution on a computer, the underlying principles at work are similar.

This program was not designed to discover passing, defending and scoring. It was designed to win at soccer. The program on its own realized that passing, defending and scoring are good strategies for winning at soccer. The rules of the simulation do encourage this behavior, but they were not designed to - the fact that the rules of the simulation create this result is a perfectly valid discovery, even though it's a discovery that humans made thousands of years ago.

A bit more than 15 years ago I saw a documentary on Discovery Channel featuring very similar work by a British scientist / computer programmer. His software spawned simple "lifeforms" made up of basic 2D and 3D geometric objects - cubes, cylinders, flat triangles, etc. - that then tried to evolve the most efficient ways to move and travel in the simulated environment they were put in: sometimes an airy environment with ground underneath them and gravity, sometimes an "ocean" in which the "lifeforms" swam. Minute after minute the "lifeforms" jiggered and bounced around like broken machinery, slowly developing the method of moving and navigating that was most efficient for their particular shape. He spawned caterpillar-like animals made from chains of cubes that slowly learned how to wriggle and crawl just like caterpillars and snakes do. He spawned randomized "freaks" that sometimes managed to learn how to walk despite their disfigurements, and sometimes learned that the only way forward was to throw some body part around to pull themselves along. He spawned biped animals that slowly learned how to jump to move forward, and triped animals that learned how to skip from one leg to the other, to the third. He spawned lifeforms in a watery environment that learned how to rhythmically oscillate their body parts to create propulsion, swimming forward and turning around. To me, this was just as impressive as the featured story, if not more. As a curious detail, the programmer developed his software in BlitzBasic, running on a heavily accelerated Amiga 1200.

I read the article wanting to know how the Avida organisms developed memory. Basically, the programmer included an instruction that said "Do what you did last time." It is not evolution if the programmer hands them the ability. Also, when the goal stays in the same location every time, your robots can develop "memory" through the program itself. Ex: To go 2 up & 3 left -> Forward, Forward, Turn Left, Forward, Forward, Forward. No intelligence in the search pattern. This is simply hard-coding the location of the goal. I would not call this memory.
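That kind of "memory" amounts to a hard-coded instruction tape. In a hypothetical grid world where the goal is always 2 up and 3 left of the start, the whole evolved "brain" can be:

```python
# Hypothetical grid world: the goal never moves, so the evolved "memory"
# can be nothing more than a fixed instruction tape.
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def run(tape, start=(0, 0)):
    # Execute the instruction tape from the start position; no sensing,
    # no state, no decisions.
    x, y = start
    for op in tape:
        dx, dy = MOVES[op]
        x, y = x + dx, y + dy
    return (x, y)

# A genome that always "remembers" where the goal is, with no state at all.
genome = "UULLL"
```

Move the goal and this genome scores zero; real memory would let the same individual cope with both locations.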

I am very interested in this subject and get excited every time Slashdot posts a new story on this topic, but I never see any real advances vs. what I was doing in school 20 years ago. This doesn't mean advances aren't being made, but I think they are now at the level where they don't make simple easy-read stories. Real robots (not simulated ones) getting from point A to B (not just wanting to go from A to B) over rough terrain without help (Mars rovers) is much more complicated, and a required advance to put this technology into a real application. MIT, NASA, and the National Labs always seem to have interesting projects going on.

We celebrate these simple outdated advances in AI when we have hundreds of programs out there now capable of playing World of Warcraft without help simply to collect virtual gold to sell for cash.

Another reason I hate these articles is that they don't include any real specifics. You could learn more reading Wikipedia on GA, GP, ANN... It was a video of a Koza project that got me really interested in this topic. Why don't people include something like this in the article? A couple of years ago, I decided to rewrite one of my old projects so that people could easily run it online - Ant Simulator [lalena.com]. Watching the system quickly learn or solve a problem is much more satisfying than reading an article written by someone who doesn't actually understand the field.

Memory for Genetic Programming was an interesting topic back in 1995 too... And the first Koza book was an inspiration.

One way to test out 'memory' in an experimental way is to give the individuals some 'memory cells' (or internal preserved state) to work with, and then A/B test some of the good individuals vs. the same individuals with noise added to the memory cells. In that way, one can get a handle on whether/how they're really making use of the memory. Just like adding junk code into a buggy progra
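That noise-injection A/B test could be sketched like this, on a made-up task where the agent must echo the previous step's input (the task and both agents are illustrative, not from any actual GP system):

```python
import random

def evaluate(agent, memory_noise=0.0, trials=500, seed=0):
    # Made-up task: on each step the agent must output the *previous*
    # step's signal, which is stored in a single memory cell. With
    # memory_noise > 0 the cell is randomly corrupted, so the score of an
    # individual that truly relies on memory should collapse toward chance.
    rng = random.Random(seed)
    memory, correct = 0, 0
    for _ in range(trials):
        signal = rng.randrange(2)
        cell = rng.randrange(2) if rng.random() < memory_noise else memory
        if agent(cell) == memory:
            correct += 1
        memory = signal  # the cell always stores the latest signal
    return correct / trials

memory_user = lambda cell: cell  # relies entirely on the memory cell
memory_ignorer = lambda cell: 1  # constant guess; ignores memory
```

An individual whose score collapses to chance under full memory noise demonstrably depends on its memory cells; one whose score barely moves was never really using them.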

If you've seen any of the truly massive demos done in Conway's game of life you will rapidly see that actually modeling a physical mechanism for memory based on simple principles is going to take a metric assload of computing time. Actually, I think the actual value is somewhere between an assload and a fuckton. At this point it seems more like a useful separate experiment.

In the late 1980s, ecologist Thomas Ray, who is now at the University of Oklahoma in Norman, got wind of Core Wars and saw its potential for studying evolution. He built Tierra, a computerised world populated by self-replicating programs that could make errors as they reproduced.

When the cloned programs filled the memory space available to them, they began overwriting existing copies. Then things changed. The original program was 80 lines long, but after some time Ray saw a 79-line program appear, then a 78-line one. Gradually, to fit more copies in, the programs trimmed their own code, one line at a time. Then one emerged that was 45 lines long. It had eliminated its copy instruction, and replaced it with a shorter piece of code that allowed it to hijack the copying code of a longer program. Digital evolvers had arrived, and a virus was born.

Avida is Tierra's rightful successor. Its environment can be made far more complex, it allows for more flexibility and more analysis, and - crucially - its organisms can't use each other's code. That makes them more life-like than the inhabitants of Tierra.

Actually, organisms using each other's code sounds way more like our world than ones that can't leech off each other. They already pointed out viruses, and plenty of species exist today that need other species to continue to survive... In fact, pretty much all animals need to eat other lifeforms, because we can't draw energy from the sun directly.

In the late 1980s, ecologist Thomas Ray, who is now at the University of Oklahoma in Norman, got wind of Core Wars and saw its potential for studying evolution. He built Tierra, a computerised world populated by self-replicating programs that could make errors as they reproduced.

I was so amazed by the results claimed for Tierra that I went and reimplemented it myself [homeunix.net]. And damned if I didn't get similar results [homeunix.net]. At the time, it blew me away that such a system could come up with novel solutions I hadn't expected or 'programmed in'. Indeed, a couple times it took me a while to even figure out how the things worked.

I'm not sure where the claim about "can't use each other's code" comes from. Perhaps a subtle misunderstanding. While Avida does keep each virtual machine fully isolated from the others, Avida _does_ have explicit support for parasitic behaviors, in the form of code injection into neighboring organisms.

Organisms can perfectly draw energy directly from the sun, and animals and humans still do (such as vitamin D production).

As a physician I find your statement ludicrous. While there is a photochemical step in the synthesis of vitamin D, it's hardly fair to call a double bond being split by a photon "drawing energy" from the sun. For that matter, you could say that the dimerization of thymine in DNA by sunlight (which produces the genetic damage observed when a person is exposed to UV radiation) is another way we "draw energy" from the sun.

Humans do not produce ATP from sunlight. Period.

And I would agree with OP - all organisms, including plants, are directly dependent on other organisms. Without nitrogen fixing bacteria to fix nitrogen for the plants, and without decomposing bacteria to release minerals again into the soil, even plants would not exist. While the organisms that are set up to harvest sunlight directly from photosynthesis are the biggest input into the food chain, they can't live without the rest of it, especially the lowly decomposers. We're now all totally dependent on one another.

While there is a photochemical step in the synthesis of vitamin D it's hardly fair calling a double bond being split by a photon as "drawing energy" from the sun.

Is the step endothermic or not? If it is endothermic, then the reaction is indeed drawing energy (no "scare quotes" needed) from the Sun. Given that UV radiation (which is the highest energy photons coming from solar radiation) is apparently needed as part of the process, this indicates to me that the critical step is highly endothermic.

It's a reversible reaction. Sunlight favors the forward reaction and that's all. It does happen on its own. And do you know, Vitamin D is good for you and there are pathologies involved in Vitamin D deficiency, but you could probably survive your entire life in a cave out of sunlight. It's not easy to die from rickets - though there certainly would be quality of life issues.

Given that UV radiation (which is the highest energy photons coming from solar radiation) is apparently needed as part of the process, this indicates to me that the critical step is highly endothermic.

It doesn't indicate that. The reason the reaction needs light is that it is thermally forbidden, as the molecular orbitals don't line up in the right way (they are out of phase). The reason UV light is needed is that the absorbance band of the molecule is in the UV. The opposite reaction also requires light (though probably of a longer wavelength, as it has 3 conjugated double bonds instead of 2), and only one of the two reactions can be endothermic.

Organisms can perfectly draw energy directly from the sun, and animals and humans still do (such as vitamin D production).

That's total bullshit, and shows incredible ignorance about how the body works. Not one joule of energy an animal's body uses comes from the sun. We use the sun in vitamin D production because a photon can split the molecule, and it's cheaper to let the sun do it than to do it ourselves.

So the sun does save a little energy, but it does not, in any way, shape, or form, produce energy for animals to use. The only way animals get energy from the

Perhaps if you were a little more polite, people would be less inclined to point out your mistakes.

The first and last things people say are the bits most likely to be noticed; this is called the "Primacy Effect" and the "Recency Effect". You started off by incorrectly saying vitamin D production is "drawing energy directly from the sun". This is what set the tone of your post, and it is incorrect, which is why people mentally discarded the rest of what you said.

Me too; I was just trying to give helpful advice that may actually result in you being able to maintain healthy relationships with other humans. Try reading your last few posts before being so impolite to others. Perhaps you just like to argue.

Okay, how about we try a more objective approach, since you can't judge this sensibly for yourself... 1/3rd of your comments have been moderated down, often as "flamebait". I clicked on one of them and it starts with "shut up".

Yeah, it's clearly me that "likes to argue" here *rolleyes* I'm not going to waste any more time trying to help you understand why people think you're a jerk.

The robots need to become spoiled, overpaid millionaires, who refuse to train (France). Brag a lot (England) that their opponent is a bunch of "boys" (Germany) who are afraid of them. Then take a 4-1 shellacking from the "boys." And despite being the defending champions, and having a world-class league in their country, bow out early, because all of the players in their first-class league are from South America (Italy) and they have no good domestic players.

I'm always confused if these discoveries are supposed to show that we'll someday have sentient robots that will rule the world a la every sci-fi for the past decade, or if they are trying to model biological evolution in a meaningful way. Personally, I hope the sentient robot thing is NP-complete. :P

For modeling biological evolution, any in silico organism model needs to incorporate the fact that most mutations are "nearly neutral" (some might say slightly deleterious) with respect to the scoring algorit

If evolution is the work of Gods, and we can refrain from wiping ourselves out in the next few generations, then we shall be as Gods... And if you follow the mythologies of old, we'll probably be just as stupid and make as many silly jealous mistakes as those very "human" Gods from back then...

Could be. Everything in religion can be mixed with science. Science is just there to try to understand what is there.

If God does the selecting, then you'd be talking about a divine plan (who gets to mate with whom), and even that can be explained with the Many Worlds Interpretation, just as heaven and hell can be explained with multiple universes.

I do not believe in intelligent design, but I will also not deny other people's beliefs, because who am I to claim the right to believe in what I believe?

Actually, if you ask the top scientists, most of them will say they believe in God. It's the most politically correct answer, but in their minds they are thinking 'no, dumbass, and quit asking'.

When I was young I went to Sunday School religiously. I wanted to believe, and I wanted to see the path. After years of that, one day in Sunday School I picked up the one book it all centered around (the Bible) and asked the teacher if it was true. He said

"So I said 'Where's the dinosaurs?'" I don't believe in God, like I already said. Especially not in any one of the religions out there. Better yet, if he did exist, then I am going to seriously kick him very, very hard in the balls after I get past that gate of heaven, and if I don't get in I will break in. I'm that pissed off about certain things.

But thanks for the effort of blocking my effort to open the door to science for fiercely religious people.

Do they? This page [nytimes.com] says that 40% of scientists surveyed believed in god (way less than the populace at large) and only 10% of "elite scientists" believe in god. I wouldn't consider 10% of scientists to be "most" of them.

Basically, if you have a bunch of random individuals, and the 'evolution' just mashes a bunch of the better ones together, you'll see the increase in fitness occurring. But it's not just a small effect: almost any crazy 'mashing together' method works, and the adaptation will spark off unbelievably quickly.

I know this because I did this for my PhD back in 1995. I had a choice then between going the Neural Net path and playing around with the Genetic Algorithm/Genetic Programming stuff. Simple experiments proved that making NNs 'do the right thing' was a fairly tricky process of getting things set up right (and your formulae had to be right, etc.: a fairly sensitive procedure). But the Genetic stuff was amazingly robust: almost any crazy method of crushing individuals together will produce remarkable innovation and learning (on a population basis).

But don't take my word for it - write a small piece of code yourself. The literature makes it sound like a more exact science than it needs to be. As I said, almost any 'mashup' method will work - the 'evolution thing' will simply find a way to 'protect' the important stuff.
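A toy demonstration of that robustness on the classic OneMax problem (maximize the number of 1-bits): two quite different "mashing" operators, one-point and uniform crossover, and either one climbs rapidly. This is a generic textbook-style illustration, not the setups from anyone's thesis:

```python
import random

N, POP, GENS = 40, 30, 60  # genome length, population size, generations

def onemax(ind):
    # Toy fitness: the count of 1-bits; the optimum is all ones.
    return sum(ind)

def child(a, b, rng, uniform):
    if uniform:
        kid = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover
    else:
        cut = rng.randrange(1, N)                        # one-point crossover
        kid = a[:cut] + b[cut:]
    # Light mutation: flip each bit with 1% probability.
    return [g ^ 1 if rng.random() < 0.01 else g for g in kid]

def run(uniform, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randrange(2) for _ in range(N)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=onemax, reverse=True)
        elite = pop[: POP // 2]  # keep the better half (elitism)
        pop = elite + [child(rng.choice(elite), rng.choice(elite), rng, uniform)
                       for _ in range(POP - len(elite))]
    return max(onemax(ind) for ind in pop)
```

With this trivially "nice" encoding, either crossover operator pushes the best individual far above the random-start baseline within 60 generations - which is exactly the "almost any mashup works" effect, and also why heroically engineered encodings matter on harder problems.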

So while this looks like 'old news' in some ways, I'm glad that they've got an eye-opening application: more people should know how much the computer guys can add to the biological evolution debate.

Basically, if you have a bunch of random individuals, and the 'evolution' just mashes a bunch of the better ones together, you'll see the increase in fitness occurring.

Funny, I did my Master's degree in 2003 on GAs, and the major finding was that, while the system finds novel solutions, it also exploits the weaknesses of the fitness function very easily. In other words, if you wanted a particular result, you had to put most of your effort into the fitness function that describes the desired result. This doesn't work without said function; ergo, design is the key, not simply mashing things together. In fact, if you run a GA without a fitness function, you get a rand

The whole point is that this wasn't designed. Only the initial conditions were designed, and the designers then let them go on their own without any knowledge or comprehension as to how they would progress. The bullshit that is known as "Intelligent Design" is based on the assumption that the end results are too complex to have arisen on their own. In this case, the ability for the soccer players to know how to pass and such would be seen as "too complex" to have arisen by trial and error from a basic initial condition, but this is exactly what happened.

This was not intelligent design. It is, in fact, the very definition of evolution. It's also worth noting that evolution says absolutely nothing about the initial conditions for life, only how it progressed since it began.

It's imho an illogical assumption that the universe needed an intelligent creator, but that the intelligent creator didn't need one himself.

I agree, but that's a second step in the argument. If they first demonstrate that intelligence can emerge from a non-intelligent system, then it's an obvious corollary that intelligence wasn't necessary to create the non-intelligent system in the first place.

Have you considered how unlikely it is to have a creator that fits us so well?

The abrahamic deity fits the people even better than we are fit for this planet. People live in deserts and on the poles, but the abrahamic god is interestingly only concerned with things that are near his people and the local culture. When he wants people to go somewhere or conquer some country, it's never around the globe or on the poles. He's deeply concerned with things that concern the population at that time and place. He ref

This is the problem with AI research. The notion that we can bootstrap life by mimicking evolution is crazy. Computers are getting faster, but billions of years multiplied by billions of neurons per organism is something they can't do. Nobody gives a shit what their sex robot "emerged solely out of," they just want to fuck it.

Of course. Which is why we are evolving algorithms to do things we find desirable (in this case, playing robot soccer) using components that already implement the desirable traits. Much like we ha