Race Is on to Advance Software for Chips

Wednesday, Apr 30, 2008 at 5:03 AM

PALO ALTO, Calif. — In the computer world’s equivalent of “The Amazing Race,” three rival teams of computer researchers are working on new types of software needed to better use computer chips that can process many tasks at the same time.

Stanford University and six computer and chip makers plan to announce Friday the creation of the Pervasive Parallelism Lab. Besides Stanford, the backers are Sun Microsystems, Advanced Micro Devices, Nvidia, I.B.M., Hewlett-Packard and Intel.

Last month, Intel and Microsoft announced they were jointly financing new labs at the University of California, Berkeley, and the University of Illinois at Urbana-Champaign to tackle the same problem.

All three efforts are in response to a growing awareness that the software industry is not ready for the coming availability of microprocessors with 8 or 16 or more cores, or processing units, on a single chip. Computer and chip makers are concerned that if software cannot use the new hardware efficiently, customers will have little reason to upgrade.

The Stanford lab, which will cost $6 million over three years, will be led by Kunle Olukotun, a professor of electrical engineering and computer science. Mr. Olukotun helped pioneer the idea of multicore microprocessors, which have since gained rapid popularity in both corporate and consumer computer hardware.

The most advanced corporate server microprocessors, as well as processors for video game machines, have up to eight cores. While today’s operating systems — the basic layer of software that runs a computer — can work with this type of hardware, software engineers widely acknowledge that most applications, ranging from corporate productivity software to multimedia programs, are not designed for efficient use of the dozens or hundreds of processors in future computers.
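
To make that gap concrete, here is a minimal sketch in Java (an illustration of the general problem, not code from any of the labs). The sequential loop can occupy only one core no matter how many the chip provides; the parallel version forces the programmer to divide the array into chunks, hand each chunk to a thread pool, and merge the partial sums by hand.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            long[] data = new long[8_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            // Sequential: a single loop runs on a single core.
            long sequential = 0;
            for (long v : data) sequential += v;

            // Parallel: the work must be split explicitly, one chunk
            // per available core, and the results merged afterward.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            List<Future<Long>> partials = new ArrayList<>();
            int chunk = data.length / cores;
            for (int c = 0; c < cores; c++) {
                final int lo = c * chunk;
                final int hi = (c == cores - 1) ? data.length : lo + chunk;
                partials.add(pool.submit(() -> {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    return sum;
                }));
            }
            long parallel = 0;
            for (Future<Long> f : partials) parallel += f.get();
            pool.shutdown();

            System.out.println(sequential == parallel); // prints: true
        }
    }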

The separate university efforts will share some approaches, but will also try different experiments, including new programming languages and hardware innovations.

They will also rethink operating systems and compilers, the specialized software that translates raw programming instructions into something that computers can understand.

The Berkeley researchers have broken parallel computing problems into seven classes, each of them to be attacked using a different approach.

In contrast, the Stanford researchers said they were looking for new ways to hide the complexity of parallel computing from programmers, and they will use virtual worlds and robotic vehicles to test their ideas.
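
For a rough sense of what hiding that complexity can look like, consider the sketch below, which uses Java’s standard parallel streams as an analogy rather than the Stanford lab’s actual tools. The programmer states only what to compute; the library’s runtime decides how to partition the work across cores.

    import java.util.stream.LongStream;

    public class HiddenParallelism {
        public static void main(String[] args) {
            // The single .parallel() hint is all the programmer writes;
            // chunking, threading, and merging happen inside the library.
            long sumOfSquares = LongStream.rangeClosed(1, 2_000_000)
                                          .parallel()
                                          .map(v -> v * v)
                                          .sum();
            System.out.println(sumOfSquares);
        }
    }

Whether abstractions of this kind can scale to the dozens or hundreds of cores expected in future chips is, in effect, the question the new labs have been set up to answer.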

Beginning in 2004, Intel acknowledged that it had hit what was essentially a heat barrier in designing ever-faster microprocessors and aggressively shifted to multicore designs.

Now there is a rush to develop tools for mainstream programmers who have spent their entire careers designing software for sequential, not parallel, programming systems, said John L. Hennessy, Stanford’s president and a professor of computer science.
