SiCortex: High Performance Computing Without the High Electric Bills

The Assabet River these days rushes through Maynard, MA, without lending any of its liquid muscle to local industry. But for more than a century, the river supplied power to the Assabet Woolen Mill, a vast brick complex that, in its heyday, was the largest source of wool for U.S. military uniforms. I went to the mill two weeks ago to visit computer maker SiCortex, which is just one of numerous high-tech startups, including Monster.com and 38 Studios, that have taken over the complex, now known as Clock Tower Place. And when I saw how swiftly the Assabet flows past the old mill buildings, I was reminded that for some companies—including, increasingly, computing companies—rivers are still a prime source of power. Google, for example, spends so much money on electricity that the search giant decided to build its newest data centers near hydroelectric dams in Washington state, where electricity is cheaper.

As it turns out, SiCortex’s whole mission is to help organizations do lots of computing without having to worry so much about energy costs. The company makes massively parallel computers that contain thousands of individual processors, wired together in a way that lets them exchange data very quickly—so quickly that the processors themselves don’t have to be very fast in order for the machine as a whole to carry out trillions of operations per second. And because the processors in SiCortex’s machines run at a relatively pokey 700 megahertz, they don’t consume nearly as much power (or give off as much waste heat) as the multi-gigahertz processors hawked by the Intels of the world.
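The arithmetic behind “many slow chips beat a few fast ones” is simple. Here is a back-of-the-envelope sketch—the processor count and one-operation-per-cycle figure are illustrative assumptions, not SiCortex specifications:

```python
def aggregate_ops_per_sec(n_procs, clock_hz, ops_per_cycle):
    # Peak aggregate rate: each processor contributes
    # clock * ops-per-cycle operations every second.
    return n_procs * clock_hz * ops_per_cycle

# A hypothetical machine with 6,000 cores at 700 MHz, one
# operation per cycle, already tops four trillion ops/second:
aggregate_ops_per_sec(6_000, 700e6, 1)  # 4.2e12
```

The catch, of course, is keeping all those slow processors fed with data—which is where the interconnect described below comes in.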

If you take power and cooling expenses into account, according to SiCortex, its machines are only one-third as costly to own and operate as equally fast Intel-based clusters. In fact, a SiCortex machine uses so little electricity that it can be powered by a small team of cyclists. The company organized just such a stunt at MIT last December, when 10 members of the MIT cyclocross team hooked stationary bikes up to generators and pumped out enough juice to run a fusion simulation. Of course, “That’s not a great way to power your computer system,” admits Matt Reilly, SiCortex’s co-founder and chief engineer. “The first thing we found out was that you have to cool the people pedaling the bikes. A really good bicyclist can sustain something like 300 watts, but normally they’re moving through the air while they do that. These guys were sweating like pigs.”

Reilly and co-founders Jud Leonard (now CTO) and John Mucci (a board member and the longtime CEO) came up with the basic idea for SiCortex’s fast but energy-efficient hardware back in 2002. The time needed to finish a computation, Reilly explained to me, is usually determined by three factors: the time required to do arithmetic in the CPU, the time required to move data around in memory, and the time required for input/output operations (that is, getting data into and out of the CPU). For parallel computers—which most of today’s high-performance computers are—there’s also a fourth factor: the communications time, or the time needed to move data between processors.
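Reilly’s breakdown can be written as a toy cost model. This is a sketch only—real machines overlap these phases, and the numbers below are made up for illustration:

```python
def time_to_solution(t_cpu, t_mem, t_io, t_comm):
    # Simplest possible model: the four factors summed.
    # Shrinking any one term only helps until another
    # term becomes the dominant cost.
    return t_cpu + t_mem + t_io + t_comm

# Made-up numbers: once arithmetic, memory, and I/O are fast,
# inter-processor communication dominates the total.
time_to_solution(t_cpu=2.0, t_mem=1.5, t_io=1.0, t_comm=6.0)  # 10.5
```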

Semiconductor manufacturers have done an amazing job of speeding up both CPUs and memory chips over the last three decades (but at a high energy cost, as already mentioned). I/O operations are still a bottleneck, though a variety of tricks exist for speeding them up. But Reilly, Leonard, and Mucci—all veterans of the famed Boston minicomputer company Digital Equipment Corporation—noted that nobody was really working on the fourth problem: reducing the travel time between processors in parallel machines. “That created an opportunity for a very small company to do very large things,” says Reilly.

In a machine with thousands of processors, you can’t simply string an Ethernet cable from each processor to every neighbor that it might need to communicate with. (Imagine how many phone lines would be coming out of your house if you needed a dedicated line to connect with every home or office you might want to dial.) To keep the number of wires manageable, a parallel machine’s “backplane” or communications mesh has to take the form of a …
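The phone-line analogy can be made concrete with a quick count. The processor numbers here are illustrative assumptions, not figures from SiCortex:

```python
def full_mesh_links(n):
    # One dedicated wire for every pair of n processors:
    # n choose 2 = n*(n-1)/2 links, which grows quadratically.
    return n * (n - 1) // 2

def degree_limited_links(n, degree):
    # If each processor instead connects to only a fixed number
    # of neighbors, total wiring grows linearly with n.
    return n * degree // 2

full_mesh_links(1_000)          # 499,500 wires -- impractical
degree_limited_links(1_000, 3)  # 1,500 wires
```

With a fixed degree per node, the trick is picking a network shape that still keeps every processor only a few hops from every other—which is the design problem the article turns to next.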