Posted
by
Soulskill
on Friday June 13, 2014 @02:13PM
from the bring-me-the-severed-subroutine-of-your-fallen-foe dept.

An anonymous reader writes: A while back we discussed Code Combat, a multiplayer game that lets players program their way to victory. They recently launched a tournament called Greed, where coders had to write algorithms for competitively collecting coins. 545 programmers participated, submitting over 126,000 lines of code, which resulted in 390 billion statements being executed on a 673-core supercomputer. The winner, going by the name of "Wizard Dude," won 363 matches, tied 14, and lost none! He explains his strategy: "My coin-collecting algorithm uses a novel forces-based mechanism to control movement. Each coin on the map applies an attractive force on collectors (peasants/peons) proportional to its value over distance squared. Allied collectors and the arena edges apply a repulsive force, pushing other collectors away. The sum of these forces produces a vector indicating the direction in which the collector should move this turn. The result is that: 1) collectors naturally move towards clusters of coins that give the greatest overall payoff,
2) collectors spread out evenly to cover territory. Additionally, the value of each coin is scaled depending on its distance from the nearest enemy collector, weighting in favor of coins with an almost even distance. This encourages collectors not to chase lost coins, but to deprive the enemy of contested coins first and leave safer coins for later."
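The quoted strategy maps naturally onto a small force-summation loop. Here's a minimal sketch in Python; positions are plain (x, y) tuples, and every name is hypothetical rather than Code Combat's actual API:

```python
import math

def coin_force(coin_pos, coin_value, collector_pos):
    """Attractive force: value / distance^2, pointing toward the coin."""
    dx = coin_pos[0] - collector_pos[0]
    dy = coin_pos[1] - collector_pos[1]
    d2 = dx * dx + dy * dy
    if d2 == 0:
        return (0.0, 0.0)
    d = math.sqrt(d2)
    mag = coin_value / d2
    return (mag * dx / d, mag * dy / d)

def ally_force(ally_pos, collector_pos, strength=1.0):
    """Repulsive force pushing collectors apart, also 1/distance^2."""
    dx = collector_pos[0] - ally_pos[0]
    dy = collector_pos[1] - ally_pos[1]
    d2 = dx * dx + dy * dy
    if d2 == 0:
        return (0.0, 0.0)
    d = math.sqrt(d2)
    mag = strength / d2
    return (mag * dx / d, mag * dy / d)

def move_direction(collector_pos, coins, allies):
    """Sum all forces; the resulting vector is this turn's heading."""
    fx = fy = 0.0
    for pos, value in coins:
        cx, cy = coin_force(pos, value, collector_pos)
        fx += cx
        fy += cy
    for pos in allies:
        ax, ay = ally_force(pos, collector_pos)
        fx += ax
        fy += ay
    return (fx, fy)
```

Edge repulsion is omitted for brevity; it would just be one more repulsive term in the same sum, as would the enemy-distance scaling of coin values described above.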

I thought Greed was "The Multimillion Dollar Challenge" where teams of five tried to answer trivia questions but each round one player was randomly paid to try to take another player out of the game, or be thrown out trying....

I thought Greed was getting kickbacks from the lobbying groups to buy your support for questionable bills.

What he has done is effectively apply game theory to deciding which coins to target and how to spread his resources. It's quite clever, but applying this kind of solution to this style of problem isn't really unexpected.

Under 1 hour... so let's assume half an hour... that's still something like $250 a day for a cluster that could be built for under $10,000. Break-even is within 2 months of use, including electricity. So those prices are still pretty high; it's just that most people only need that kind of power for short bursts of time.
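Spelling that arithmetic out (the $250/day and $10,000 figures are the parent's guesses; the electricity cost here is a pure assumption):

```python
# Back-of-envelope break-even; every figure here is an assumption.
rental_equivalent_per_day = 250   # dollars/day of rented cluster time
build_cost = 10_000               # dollars to build a comparable cluster
electricity_per_day = 30          # guessed running cost for ~600 cores

break_even_days = build_cost / (rental_equivalent_per_day - electricity_per_day)
print(round(break_even_days), "days")  # roughly a month and a half
```

Even with a generous electricity estimate, the build pays for itself well inside two months of continuous daily use, which is the parent's point: rental only wins for bursty workloads.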

$10,000 barely gets you ONE modern, well-equipped 20-core server system (I am thinking in particular of the Dell R820/R920 platforms). So no, while you could probably cobble together 100 or so ARM cores at $10/core and get something to run on it, a supercomputer it is not.

I can get 8-core systems for under $1k. It really depends on the type of hardware, which the article doesn't specify. 20+ cores in a single machine has been available since at least the turn of the century, but they've always cost an arm and a leg because of the complexity of integrating that many CPUs in a single machine. A combination of boxes amounting to the same total CPU, RAM, etc. has always been cheaper, but also larger and harder to use.

The less you spend per core (by having them less concentrated) the more you will spend on interconnecting them in a way befitting a supercomputer (i.e. massive parallelism). A pile of machines totaling 600 cores on a gigabit switch is of very little use compared to a few mega-core machines on a better, smaller network. And you don't want to know how much all the fabric would cost to properly integrate all of those 8 core systems.

It really depends on your calculations (yes, I work in academic research). There are very large, very parallel problems where 56k modems between nodes would be enough, and there are those where 12x InfiniBand is not enough. It also depends on the person implementing the system: how well versed they are in the subject matter and in cluster programming, the languages they use, and whether or not what they write is aware of what is happening where.

The fabric can actually be relatively cheap; 24-port 10Gbps and QDR InfiniBand switches can be had for under $5k these days (unless you go Cisco, of course), especially in blade systems. All in all, the hardware for clusters has gotten very, very cheap. Amazon wouldn't be selling it at $5/h if it weren't profitable.

Large research clusters, BTW (such as the ones at Fermilab, CERN, or your average university), are usually large sets of 2/4/8-core systems, sometimes with a few very large nodes thrown in, or these days a set of GPU nodes. 20-core nodes are rare in actual clusters, Blue Gene/Q-style machines aside.

This is really interesting and exciting work. In 2010, we showed that nearly this exact algorithm is used by neonates (newborns) to govern their visual attention and eye movements, and it explains much of what we know about newborn visual attention. It's exciting to see that when you essentially parallelize the algorithm with multiple agents that are aware of each other, it becomes an extremely efficient algorithm for resource collection in a completely different field/task.
http://www.ncbi.nlm.nih.gov/pu... [nih.gov]

That's really cool! I find it really interesting and elegant to see the same simple model describe the behavior of such disparate systems that, on the surface, look complicated, but can be described by the sum of simple mechanisms.

I agree, the summary was really well written.

That's a good question, about using similar techniques for image processing and object segmentation from a scene. From a cognitive standpoint, neonates rapidly build on this simple model over their first few months of life as th

You need to click on the "Elsevier Open Access" link from NCBI, which is a direct link to the article on the publisher's website (this location is where you click for all PubMed articles, as long as the publisher has provided access in that way). PubMed never displays complete articles.

After clicking through, there's a "Download PDF" link at the top left of the article, just under the green Science Direct header.

I feel compelled to tell the world about a more confusing part of NCBI that I'm trying to navigate myself around at the moment: The Transcriptome Shotgun Assembly Sequence Database. Submitting sequences is... a little tricky. Here's a simplification of the process:

Create a BioSample record for the organism that you're submitting data for

The goal is that the attraction to the coins is greater when you're close (so you don't wander past one, pulled by that large cluster off to the side) and the repulsion is lesser when you're further away (so two allies can turn directly towards each other to pick up coins that lie between both, even though they repel each other).
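A quick numeric check of that claim under the inverse-square rule (the coin values and distances here are made up for illustration):

```python
def pull(value, distance):
    # attractive force magnitude: value over distance squared
    return value / distance ** 2

near = pull(1, 2)    # lone coin of value 1, two tiles away -> 0.25
far = pull(10, 8)    # cluster worth 10, eight tiles away -> ~0.156

# The nearby coin wins even though the cluster is worth ten times more,
# so a collector doesn't wander past coins right under its nose.
```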

Anyone else find it odd that he used a distance-squared force for a 2D problem? The circumference of a circle grows linearly with the radius.

Linearly being the key word... take it one step at a time (before asking what geometry an inverse-square law could represent). The rule is derived entirely from distance. Distance reduces the spatial dimensions to one; it doesn't matter how many spatial dimensions you have, so long as you can find a scalar distance between two points.

For a less abstract explanation, think of a 2D simulation as a geometric subset of a 3D simulation (that subset doesn't have to be axis-aligned), a 2D simulation

It actually sounds like the "schema architecture" Arkin proposed in 1998 http://mitpress.mit.edu/catego... [mit.edu]. You can implement it in about 10 lines of Python, because it's just that: the sum of attractive (goal) and repelling (obstacle) force vectors, weighted by the inverse of the distance squared. I was surprised the OP didn't mention the schema architecture, because it is exactly that, and it sounds like a (simulated) robot game...
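For what it's worth, the "about 10 lines of Python" version might look like this sketch (the goal/obstacle coordinates are arbitrary; this illustrates the idea, not Arkin's actual code):

```python
def schema_move(pos, goals, obstacles):
    """Sum attractive (goal) and repulsive (obstacle) vectors,
    each weighted by 1/distance^2."""
    fx = fy = 0.0
    for sign, points in ((+1.0, goals), (-1.0, obstacles)):
        for px, py in points:
            dx, dy = px - pos[0], py - pos[1]
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9  # avoid division by zero
            fx += sign * dx / d ** 3  # unit vector (dx/d) scaled by 1/d^2
            fy += sign * dy / d ** 3
    return fx, fy
```

A call like schema_move((0, 0), goals=[(5, 0)], obstacles=[(-5, 0)]) heads toward the goal and away from the obstacle in one pass, which is the entire architecture.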

This is a very fun game! I've been looking for stuff like this. Normally I have fun writing stuff like this in games until they ban me for "hacking," when really the hacking was the only fun part for me. Here, the hacking bit IS the game.

You can tell these guys are all lamer coders; they can't document worth squat. In the forum some guy asks for clear docs and they respond, in essence, with "just run our simulator, it's too complicated to explain." What a bunch of hosers. A competition like this ought to have clearly delineated parameters. From reading their page I can't tell a darn thing about what the "Greed" environment is or what the problem to be solved is, and the summary of the winning solution in the Slashdot article here presumes you already know the conditions and goal under which the warring programs must run. I see references on the linked contest site to coins that "randomly appear" and not much more. There's no way he could submit his solution to a journal, except maybe the "Journal of Irreproducible Results." Lazy bastards. There may be an interesting solution to something here, but there seems no way to tell exactly what without reverse engineering their simulator.