Adapteva $99 parallel processing boards targeted for summer

(Phys.org) —The semiconductor technology company Adapteva earlier this month featured its parallel-processing board for Linux supercomputing at a major Linux event, and the board is targeted to ship this summer. The board will go out to those who pledged money in last year's Adapteva Kickstarter campaign and to other customers. Not a minute too soon. As Adapteva tells it, the future of computing is parallel. Big data and other demands pose a processor challenge, and Adapteva sees a problem in energy efficiency that calls for action. Adapteva is on a mission to "democratize" access to parallel computing.

The processor board, which runs Linux, is called Parallella. According to the Kickstarter page, pledges totaled $898,921 from 4,965 backers. The company took the crowdfunding route in order to produce the Parallella boards in volume. It sought funding for tooling adequate to volume production, to make the board effort viable, and to get the platform "out there."

The company's hurry-up drive to make parallel processing easier to access carries a sense of urgency because Adapteva wants to speed adoption of parallel processing across the industry. Founded in 2008, the company has seen its chip technology gain traction with government labs, corporate labs, and schools, but getting large corporations to buy into parallel computing is challenging. Adapteva became convinced that the only way to create a sustainable parallel computing platform was through a grassroots movement. The company founder, Andreas Olofsson, said that parallel computing is the only way to scale energy efficiency, performance, and cost. Systems, he stated, need to be parallel and they need to be open. "Our 99 dollar kit is going to be completely open," he said, and the Parallella open platform will educate the masses on how to do parallel computing.


"We don't have time to wait for the rest of the industry to come around to the fact that parallel computing is the only path forward and that we need to act now. We hope you will join us in our mission to change the way computers are built," they had said when appealing earlier for support.

The Lexington, Massachusetts, company has now announced that it built the first Parallella board for Linux supercomputing. It made the announcement at the Linux Collaboration Summit in San Francisco earlier this month. (The summit is a gathering of core kernel developers, distribution maintainers, ISVs, end users, system vendors, and various other community organizations.) The Linux distribution being used is Ubuntu 12.04.

Adapteva's board is the size of a credit card. It comes with a dual-core ARM Cortex-A9 processor and a 64-core Epiphany Multicore Accelerator chip. Parallella's details include 1GB of RAM, two USB 2.0 ports, a microSD slot, and an HDMI connection. Active components and the majority of the standard connectors are on the top side of the board. The expansion connectors and microSD card connector are on the bottom side.

Olofsson said the company's first audience target is developers. "We need to make sure that every programmer has access to cheap and open parallel hardware and development tools," said an Adapteva program note for the Linux event. Massively parallel computing will become truly ubiquitous, the company argues, once the vast majority of programmers and programs know how to take full advantage of the underlying hardware. Adapteva sees a critical need to close the knowledge gap in parallel programming. Its second-tier target, the company said, is people who just want an awesome computer for $99.


We don't need more computing power, we need less abstraction. Every time processors get faster, people simply soak it all up by doing less development work. It's now at the point where your typical "bona fide developer", supposedly a working professional, can only do the equivalent of a child playing with Legos. Ask him to make a shape that he doesn't have Legos for, and he sits there looking at you with a blank stare.

Idiocracy here we come. Pretty soon all of the Lego-builders will leave for Mars, or be slaughtered by hordes of pitchfork-wielding citizens wondering where their supply of Brawndo has gone, or something.

I should amend that last statement to say "We don't need more computing power nearly as much as we need less abstraction."

In my line of work as an ultra-scale consultant and infrastructure provider, I routinely make existing servers go thousands of times faster without doing anything to the hardware, and it's worth noting that the problems are NEVER solved by throwing more hardware at them.

In highly concurrent systems the result of these "complex component assemblies" manifests itself as crippling bottlenecks at various layers of the architecture. What's especially irritating is that the people who develop the software have absolutely no concept of what's even going wrong, much less how they would go about solving it.

The upside, at least for me, is that a mere order of magnitude is a disappointing performance improvement by the time I'm done eliminating bottlenecks, because bottlenecks are always far more destructive than simple inefficient code, which at least scales with workers and probably only uses CPU. Most people who deal with high concurrency would be dancing in the streets if CPU were their bottleneck.

It's an interesting contrast with what you mention about optimizing code. That's obviously very important for things like massively scaled scientific models and whatnot. But in my line of work, code efficiency takes a back seat.

What matters is things like database structure, staged data tables that are updated on an event-driven basis, indexes and matching queries, various caching layers like sphinx/memcached/xcache, doing EVERYTHING that modifies a database in an event-driven way, and generally trying to avoid bottlenecking the components that don't easily scale. And just doing and storing things in an informed way.
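As a minimal sketch of the caching idea above (the cache-aside pattern with event-driven cache updates), here is an illustrative example; a plain dict stands in for memcached/xcache, and all names here are hypothetical, not from any real codebase:

```python
# Cache-aside sketch: check the cache first, fall back to the "database"
# on a miss, and populate the cache so later reads skip the slow layer.
# In production the dict would be memcached/xcache; names are illustrative.

cache = {}                                 # stands in for memcached
database = {"user:1": {"name": "alice"}}   # stands in for a SQL table
db_reads = 0                               # counts hits on the slow layer

def get_user(key):
    global db_reads
    if key in cache:            # fast path: cache hit
        return cache[key]
    db_reads += 1               # slow path: query the database
    row = database[key]
    cache[key] = row            # populate cache for next time
    return row

def update_user(key, row):
    database[key] = row         # write to the database...
    cache[key] = row            # ...and update the cache in the same event,
                                # instead of letting readers recompute it

first = get_user("user:1")      # miss: reads the database once
second = get_user("user:1")     # hit: served from cache
```

The event-driven part is in `update_user`: the cache is refreshed at write time, so reads never have to wait on the component that doesn't scale.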

It's super easy to scale php/python/perl/ruby/tomcat/etc workers. I often find myself reducing the complexity of queries and shifting the array sorting and whatnot into PHP or whatever with great results, for example.
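The kind of shift described, letting the database hand back unsorted rows and doing the sort in the application tier where workers scale cheaply, might look like this sketch (using Python and an in-memory SQLite table purely for illustration; the table and columns are made up):

```python
import sqlite3

# Instead of "SELECT ... ORDER BY score DESC", which piles sort work onto
# the one component that is hard to scale, fetch unsorted rows and sort
# them in application code, which scales by simply adding workers.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (title TEXT, score INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [("a", 5), ("b", 9), ("c", 1)])

rows = conn.execute("SELECT title, score FROM posts").fetchall()  # no ORDER BY
top = sorted(rows, key=lambda r: r[1], reverse=True)  # sort in the app tier
```

Whether this wins depends on result-set size and where your bottleneck actually is; the point is that the sort now runs on a layer you can scale horizontally.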

"We don't need more computing power nearly as much as we need less abstraction."

It's always a tradeoff.

Today you don't build purpose-built software that will run on one hardware platform and have a closed set of requirements. You build with an eye towards extending the software in the future: adding functionality and cross-platform capabilities. Programs are also so large nowadays that you program in teams, which means that you have to find common ground on style and capabilities, as multiple people need to be able to service multiple parts of the system.

Yes: purpose-built one-shot systems can be more efficient. But whenever you want a new one you basically have to rebuild from scratch, and that is expensive. Not only in programming time, but in terms of certification, testing and (possibly) risk assessment.

That said: an eye for efficiency is never bad. But coding solely for efficiency is usually the first step to unsupportable code.

Once you have 50,000+ people using your stuff at once, you really have no choice but to abandon abstraction and ensure the transfer of institutional knowledge to minimize growing pains, at least in certain aspects of the architecture. As I mentioned, there are certain aspects of the architecture that you can shift work into which scale as easily as throwing in more servers, mounting up your gfs/zfs/nfs/etc share and adding them to the load balancer pool. In these cases abstraction is fine, as long as it's not dictating storage principles.

This is assuming of course that your application is even moderately driven by dynamic data.
