Adapteva's $100 Parallella Supercomputer Platform Now Shipping

A new $100 supercomputer will enable a wealth of compute-intensive applications in medical, automotive, and industrial control systems, machine vision... the list goes on!

Way back in the mists of time we used to call 2010, I was introduced to a guy called Andreas Olofsson. As I wrote in my column From RTL to GDSII in Just Six Weeks!, Andreas had left his job, formed a company called Adapteva, and -- working in his basement and living off his pension fund -- single-handedly invented a new, ultra-low-power computer architecture.

Andreas went on to design his own System-on-Chip (SoC) from the ground up. In fact, he took the first version of this device -- called the Epiphany -- all the way to working silicon and a packaged prototype.

About two years later, in October 2012, Andreas contacted me to say that he and his colleagues had launched a Kickstarter campaign with the mission to create "a personal supercomputer for only $100!" By this time, there were two versions of the Epiphany -- the Epiphany-III and the Epiphany-IV. Both of these devices contain an array of processor cores, each of which is equipped with its own local memory and a single-precision floating-point engine.

The Epiphany-III (implemented at the 65nm node) boasts an array of 16 processors, while the Epiphany-IV (implemented at the 28nm node) features an array of 64 processors.

Everything on the Epiphany is designed to offer optimum performance while consuming as little power as possible. For example, when operating at peak performance, the Epiphany-IV provides 100 Gflops of raw computing power while consuming only 2W. This means that, at 50 Gflops/Watt, the Epiphany-IV is 50 to 100X more power-efficient than anything else out there.
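The efficiency figure follows from simple arithmetic on the two numbers quoted above. As a quick sanity check (the throughput and power values are the article's; the helper function is just illustrative):

```python
# Back-of-the-envelope check of the efficiency figure quoted above.
# The peak throughput (100 Gflops) and power draw (2W) are the values
# from the article; everything else is simple arithmetic.

def gflops_per_watt(peak_gflops: float, power_watts: float) -> float:
    """Energy efficiency in Gflops per watt."""
    return peak_gflops / power_watts

epiphany_iv = gflops_per_watt(peak_gflops=100.0, power_watts=2.0)
print(epiphany_iv)  # 50.0 Gflops/W, matching the figure in the article
```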

Adapteva's supercomputer platform, which is called the Parallella, is based on the combination of an Epiphany multi-core processor with a Zynq All Programmable SoC from Xilinx. In fact, there are going to be two versions of this little beauty -- one based on an Epiphany E16 (16 cores), and one equipped with an Epiphany E64 (64 cores). Even when running flat out, a Parallella equipped with an Epiphany E64 will consume as little as 5W!

The credit-card-sized Parallella supercomputer is based on the combination of a Zynq All Programmable SoC from Xilinx and an Epiphany multi-core processor from Adapteva.

Four-board Parallella stack with Ethernet and power connectors.

Laptop connected to a 42-board, 756-CPU Parallella cluster, which consumes less than 500W!

For the past few weeks over on All Programmable Planet, people have been asking "Does anyone have any news about the Parallella?" I must admit that I've been pretty excited to hear what's going on myself, because I made a $99 pledge on the Kickstarter project, and since then, I've been eagerly looking forward to seeing my Parallella "in the flesh" as it were.

I know that initial prototypes were shipped to major backers back in December of 2012, but since that time, everything seemed to fall strangely quiet. Well, I just heard that the folks at Adapteva have started shipping early “Beta” boards to Kickstarter backers of the ROLF, 64-CORE-PLUS, and DEVELOPER support levels.

My understanding is that the folks at Adapteva are still planning on making a few more "tweaks" and refinements, after which they will ship the remaining 6,300 Parallella boards ordered via Kickstarter -- all of these boards should ship by the end of the summer.

Ships with free, open-source Epiphany development tools, including a C compiler, multicore debugger, Eclipse IDE, OpenCL SDK/compiler, and runtime libraries.

Dimensions are 3.4” x 2.1”

But wait, there's more, because Adapteva is now taking pre-orders for the 16-core Parallella platform from the general public. Parallella boards will be available in different build configurations with a starting price of $99, and these "general availability" orders will ship later this fall (click here for more details).

So, now I'm awaiting the arrival of my very own personal supercomputer (the only problem is that I am not a patient man). How about you, did you sign up on the Kickstarter campaign and order one of these little beauties? If so, what do you plan to do with it when it arrives?

Chess algorithms are usually tuned to the target hardware and use various 'tricks' (like 64-bit board representations that 64-bit processors can manage and operate on easily). Having ranks of processors could allow new algorithms to emerge, and might identify ways to apply very large processor banks to physical systems that are currently difficult to model: EM fields, turbulent fluid flow, large-molecule interactions, encryption/decryption, etc.
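The "64-bit board representation" trick mentioned above is usually called a bitboard: the 8x8 board is packed into a single 64-bit integer, one bit per square, so a whole position can be shifted and masked in one machine operation. A minimal sketch (the square numbering and the `knight_targets` helper are illustrative, not from any particular engine):

```python
# Minimal bitboard sketch: an 8x8 chess board packed into a 64-bit integer,
# one bit per square (bit 0 = a1, bit 63 = h8). A 64-bit CPU can shift and
# mask an entire board position in a single operation.

def square(file: int, rank: int) -> int:
    """Bit index for a square, with file and rank in 0..7."""
    return rank * 8 + file

def knight_targets(sq: int) -> int:
    """Bitboard of the squares a knight on `sq` attacks."""
    board = 0
    f, r = sq % 8, sq // 8
    for df, dr in ((1, 2), (2, 1), (2, -1), (1, -2),
                   (-1, -2), (-2, -1), (-2, 1), (-1, 2)):
        nf, nr = f + df, r + dr
        if 0 <= nf < 8 and 0 <= nr < 8:
            board |= 1 << square(nf, nr)
    return board

# A knight in the a1 corner attacks only two squares: b3 and c2.
corner = knight_targets(square(0, 0))
print(bin(corner).count("1"))  # 2
```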

One practical advantage is a better understanding of how to break algorithms into pieces that can be executed efficiently on a large number of processors. Chess computers use algorithms that are common to other difficult problems (economics, scheduling, FPGA routing, etc.), so this could help identify new algorithmic approaches to solving those problems as well.
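The "break it into pieces" idea can be sketched as a root-splitting pattern: each core evaluates a disjoint chunk of candidate moves, then a single reduction picks the overall winner. The function names and the toy evaluation below are purely illustrative; only the partition/reduce structure is the point:

```python
# Sketch of splitting a root-level search across many cores. Each "core"
# evaluates only its own chunk of candidate moves, and the per-chunk
# winners are combined with one final reduction. evaluate() stands in
# for a real per-move search; here the chunks are processed sequentially,
# but each call to best_in_chunk() is independent and could run on its
# own Epiphany core.

def partition(items, n_workers):
    """Split work into n_workers roughly equal chunks."""
    return [items[i::n_workers] for i in range(n_workers)]

def best_in_chunk(evaluate, chunk):
    """Each worker searches only its own chunk of root moves."""
    return max(chunk, key=evaluate)

def parallel_best(evaluate, moves, n_workers=16):
    """Fan out over chunks, then reduce the per-chunk winners."""
    chunks = [c for c in partition(moves, n_workers) if c]
    winners = [best_in_chunk(evaluate, c) for c in chunks]
    return max(winners, key=evaluate)

# Toy evaluation: pretend higher-numbered moves score better.
moves = list(range(100))
print(parallel_best(lambda m: m, moves))  # 99
```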

While it's certainly interesting to speculate on the potential applications, I'm reminded of similar questions about the neverending increase in disk storage capacity. It seems like a case of "build it and they [applications] will come."

I'd like to see stacks of these implement a distributed chess-playing algorithm to create the best chess-playing computer in the world. It would crush every existing chess computer out there. After we do that, we could move on to predicting the weather...

Source-synchronous LVDS -- that would be great for a robust, versatile interface. Eight data lanes is not too many pins compared to the much-lower-bandwidth parallel interfaces we currently use to communicate with the processors available on the market. Thanks for the information! I hope to have the opportunity to use the Parallella.