8/03/2009 @ 6:00AM

Is Multiprocessing Possible?

Uncle Sam has a new role as the sugar daddy of multiprocessing application development.

The National Science Foundation will spend $809 million this year on research in computer science, thanks in part to a $235 million bump from the federal stimulus package. Of that, an undisclosed portion will be spent on researching tools, programming methodologies and software development in the multicore world.

While the numbers are fuzzy, this still represents a significant shift in how this kind of research is funded. In the past, almost all of it was paid for by computer hardware vendors, and they focused mostly on the easier problems. Databases, search, graphics rendering and scientific applications can all be broken into discrete processing chunks and then reassembled into a cohesive whole. The hard work is figuring out how to take advantage of many cores for other kinds of applications, or whether that is possible at all.
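To make the contrast concrete, here is a minimal sketch of that "easy" kind of parallelism, written in Python with its standard multiprocessing module. The data, the chunk count and the squaring step are all invented for illustration, but the split, process and reassemble pattern is the real one.

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Each core crunches its own slice of the data independently.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1000000))
        # Split the work into discrete chunks, one per core...
        chunks = [data[i::4] for i in range(4)]
        pool = Pool(processes=4)
        partial = pool.map(process_chunk, chunks)
        pool.close()
        pool.join()
        # ...then reassemble the partial results into a cohesive whole.
        print(sum(partial))

An application with no such natural chunks, a word processor for instance, gives the extra cores nothing to do, and that is exactly the hard case the new funding targets.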

What started us down this path goes to the heart of computer chip design. In simple terms, as Moore's Law packs wires and transistors ever closer together, chips generate more heat than they can shed at high clock speeds. The practical way around that is to slow the individual processor cores down and put more of them on the chip. But if an application takes advantage of only one of those cores, it probably won't run much faster than it did on the previous machine. It may even run slower.
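A back-of-the-envelope calculation shows why an upgrade can disappoint. The clock speeds below are invented for illustration, but the arithmetic, including the standard Amdahl's Law cap on parallel speedup, is not.

    single_core = 3.0  # GHz: the one fast core on an older chip (illustrative)
    per_core = 2.4     # GHz: each of four slower cores on a newer chip

    # A program that uses only one core actually slows down on the new chip:
    print(per_core / single_core)  # 0.8, i.e. a 20% slowdown

    # Amdahl's Law: even a parallelized program is capped by its serial part.
    def speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    print(speedup(0.9, 4))  # roughly 3.08x at best, even with 90% parallel code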

So why upgrade? The reasons are far less clear than in the past, which is why companies like Intel have been investing in all sorts of interesting things to keep selling more hardware. Intel has been a strong backer of virtualization technology from companies like VMware and Citrix, which allows multiple applications to run across a single computer's multiple cores. And with Turbo Boost, its newer designs can run busy cores above their rated speed while other cores sit idle, so that even single-threaded software gets some benefit when extra power is needed.

Intel also has created its own multicore programming language, called Ct, which is still in the experimental phase, and IBM has been working on its own multicore processing technology. Computer science departments around the globe have been dabbling in this as well. But so far, government funding of this kind of research has been spotty. In fact, pure research into multicore programming, undertaken with the understanding that it may or may not yield results, has been scarce since AT&T's Bell Labs and Xerox's Palo Alto Research Center wound down that style of open-ended work.

At that time, however, multiprocessing was considered only if there was a clear benefit to splitting up an application across multiple processors or multiple machines. Now it’s a requirement for all applications. Even Intel’s Atom chip is on track for multiple cores in future releases, but so far there isn’t much software to take advantage of it.

Whether the federal funding can break the logjam and solve this problem is unknown. Programmers think serially, and multi-core and many-core chips require them to program in parallel. But at least now some money is being spent to find out whether it's possible at all, and that's a huge step forward.
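As one last illustration of that serial-versus-parallel gap, here is a sketch, invented for this piece rather than drawn from any of the projects above, of the kind of bug that makes parallel thinking hard: a counter that is perfectly safe in a serial program and silently wrong in a threaded one.

    import threading

    counter = 0
    lock = threading.Lock()

    def work(n):
        global counter
        for _ in range(n):
            # "counter += 1" is a read-modify-write; without the lock,
            # threads interleave and some updates are silently lost.
            with lock:
                counter += 1

    threads = [threading.Thread(target=work, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; unpredictable without it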