
Intel introduces new quad-core chips

For better or for worse, today's official launch of the Core 2 Quad Q6600 puts us well into the quad-core era. Not even Hennessy and Patterson, much less the coders at the world's largest software company, can think of enough ways to keep all four cores busy, but here we are nonetheless.

The launch of the Core 2 Quad Q6600 marks the formal introduction of Intel's first non-server, non-enthusiast quad-core part (OMG NEW TOWERS?). At $851, the 2.4GHz Q6600 now sits atop Intel's mainstream desktop line, so those who just have to go quad-core with their next upgrade will have a little more to think about. Gamers and the vast majority of users who won't be using the additional two cores that much would be advised to stick with the 2.4GHz Core 2 Duo E6600, which will give similar performance at less than half the price of the Q6600.

Intel also introduced two new quad-core server parts: the Xeon X3220 (2.4GHz at $851) and the Xeon X3210 (2.13GHz at $690). Both processors sport 8MB (2x4MB) of L2 cache.

Speaking of Hennessy and Patterson and the multicore revolution, the ACM Queue interview that I stealthily linked above is eminently worth a read. The two authors manage to convey a sense of excitement about where computer architecture is right now, without being overly optimistic about the very fundamental challenges the field faces. Check out the following exchange:

DP Architecture is interesting again. From my perspective, parallelism is the biggest challenge since high-level programming languages. It's the biggest thing in 50 years because industry is betting its future that parallel programming will be useful.

Industry is building parallel hardware, assuming people can use it. And I think there's a chance they'll fail since the software is not necessarily in place. So this is a gigantic challenge facing the computer science community. If we miss this opportunity, it's going to be bad for the industry.

Imagine if processors stop getting faster, which is not impossible. Parallel programming has proven to be a really hard concept. Just because you need a solution doesn't mean you're going to find it.

JH If anything, a bit of self-reflection on what happened in the last decade shows that we - and I mean collectively the companies, research community, and government funders - became too seduced by the ease with which instruction-level parallelism was exploited, without thinking that the road had an ending. We got there very quickly - more quickly than I would have guessed - but now we haven't laid the groundwork. So I think Dave is right. There's a lot of work to do without great certainty that we will solve those problems in the near future.
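Patterson's worry is easy to see even in the friendliest possible case: a problem that decomposes cleanly still has to be carved by hand into independent chunks, farmed out, and recombined. A minimal sketch of that pattern (Python and the chunking scheme here are my own illustration, not anything from the interview) for keeping four cores busy on a simple summation:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum one independent slice of the range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Carve [0, n) into one chunk per worker and combine the results."""
    step = n // workers
    # The last chunk absorbs any remainder so the chunks cover [0, n) exactly.
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the serial sum, but split across four processes.
    print(parallel_sum(1_000_000))
```

Summation is the easy case: no shared state, no ordering constraints, trivially balanced work. Most real workloads are not so obliging, which is exactly the gap between "industry is building parallel hardware" and "people can use it."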

Hennessy and Patterson both argue that one of the key pieces for addressing this problem is government funding (especially DARPA) for basic research in computer science. There's a dearth of government money flowing into academic computer science departments right now, even though the academy is especially suited to doing the kind of fundamental, long-term research that's needed to move the entire computer industry forward.

Regular Ars readers will recall that I touched on exactly this issue in an article entitled, "AT&T Labs vs. Google Labs: not your grandfather's R&D." I got a lot of feedback on that article from many sectors of the academy and industry (including the head of AT&T Labs), almost all of it in basic agreement with the article's main point about the need for a renewed national commitment to funding blue-sky research.

Money spent on basic research is always money well spent, and companies like Intel and Microsoft should devote less of their lobbying effort to raising the H-1B cap and more to seeing that such funding flows into American computer science departments.