To make parallel computing ubiquitous, developers need access to a platform that is affordable, open, and easy to use.

They promise the latter three, but "access" seems a bit lacking. Also, they specifically left out performance, yet talk it up in separate marketing materials (5 watts for 45 GFLOPS, etc.).

Some other alternatives optimizing for local maxima in the solution set:

Just simulate it in software if you want to learn parallel programming but don't care about speed. Erlang? They seem to have a fixation on C; why not use the right tool?

Go to opencores.org and stick a zillion cores on an off-the-shelf FPGA dev board. Or a fat stack of PicoBlaze or MicroBlaze cores if you're willing to deal with the licensing (my advice: stick with OpenCores to avoid legal hassles; the weird licensing for the *Blaze family is like the creepy dude in a van offering kids "free" candy).

They seem spread a bit thin based on clicking around the website. They're doing everything short of inventing hard AI and the warp drive, which is a lot for just 4 people. Their Kickstarter seems pretty firmly grounded in comparison.

One of those "infinite spare time" play toys would be to stick a bunch of 6809 cores (or PDP-8s or -11s or Z80s or whatever) on one of my FPGA boards and figure out the glue logic. Anyone with a big enough board could download my VHDL/Verilog and go for it on their own hardware.

The dev board has a dual-core ARM Cortex-A9, so it's more like a PandaBoard. Even if you ignore the co-processor, they are offering a lot for $99.

It's interesting to compare the Epiphany processor to a GPU. Both give you lots of cores: GPUs get up into the hundreds, and Epiphany is meant to scale to 4,000. But a GPU is highly optimised for graphics, i.e. applying identical operations to millions of data values. In a GPU, groups of cores (typically 32) operate as a wavefront; if the code branches on an if statement, the cores that take the else branch have to wait until the ones that take the if branch finish.

Epiphany has independent cores: you can send each of them a different program, so you can get efficient speedups on a much wider set of algorithms. In a way it is more like the Xeon Phi, but without making each core a full x86-compatible processor.

100nm process?... Well, if you had read the information provided, you would know that the 16-core version from the Kickstarter is done in a 65nm process, and the 64-core version in a 28nm process in cooperation with GlobalFoundries.

And for the GPUs: yes, I know that a modern GPU (or even a Core i7) is more powerful. But unfortunately I cannot plug a modern GPU into my mobile robot/drone/quadrocopter to do things like real-time vision processing/neural networks/machine learning/AI. The Epiphany consumes somewhere between 2 and 5 watts (in words: TWO watts for 64 cores). I am currently not aware of anything coming close to the Parallella's performance for the mobile vision processing applications mentioned above.

PS: I know that the Raspberry Pi has quite a powerful GPU. But its GPU is locked down by NDAs and NOT accessible for OpenCL or GPGPU.

Yes, that's true. But unfortunately I cannot plug your Radeon or GTX into my mobile robot or quadrocopter to give them machine vision or neural network/machine learning "brains" (at least not without some serious improvements in battery technology!).

So, what are the alternatives for bringing current vision algorithms to mobile devices/robots? The Parallella is the only option I am aware of.

For these kinds of mobile applications, you should rather compare the Parallella with the Raspberry Pi or Arduino. And guess who wins that performance comparison! ;)