Intel claims a new chip is first step toward exascale supercomputing

New "Xeon Phi" will ship this year—but exascale is still years away.

We told you today about the newly crowned world’s fastest supercomputer, which used IBM Blue Gene/Q chips and 1.6 million cores to hit a record 16 petaflops—a petaflop being one quadrillion, or a thousand trillion calculations per second. As if that isn’t fast enough, researchers are trying to build new architectures that would bring high-performance computing into the exascale range, 1,000 times faster than a petaflop.

Intel, whose chips are used in the majority of the world’s 500 fastest supercomputers, claimed today that the newly named “Xeon Phi” line of chips (out later this year) is an early stepping stone toward exascale. The Xeon Phi processors are built with the same 22nm 3D tri-gate transistors used in the consumer-focused Ivy Bridge chips. Xeon Phi will act much like the NVIDIA GPUs that speed up many of the world’s fastest clusters. That is, it works as a “co-processor” alongside a server CPU to accelerate workloads.

You might be able to get to an exaflop just by connecting enough of today’s chips—but it wouldn’t be cost-efficient or energy-efficient, so a new architecture is needed. Roughly 40 to 50 gigaflops of performance per watt is needed for exascale, John Hengeveld, director of marketing for high-performance computing at Intel, told the IDG News Service. The first Xeon Phi chip, code-named Knights Corner, will have more than 50 cores and deliver four or five gigaflops per watt. Intel says it’ll hit a teraflop in a single processor.
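As a rough illustration of where a target like 40 to 50 gigaflops per watt comes from, here is a back-of-the-envelope sketch; the roughly 20 MW facility power budget is a commonly assumed planning figure for exascale systems, not something Intel stated here:

```python
# Back-of-the-envelope check on the efficiency target quoted above.
# The ~20 MW facility power budget is an assumed planning figure, not from Intel.
EXAFLOP = 1e18                     # floating-point operations per second
assumed_power_budget_w = 20e6      # ~20 MW

needed_gflops_per_watt = EXAFLOP / assumed_power_budget_w / 1e9
print(f"Needed efficiency: ~{needed_gflops_per_watt:.0f} GFLOPS per watt")   # ~50

knights_corner_gflops_per_watt = 4.5   # "four or five" per the article
gap = needed_gflops_per_watt / knights_corner_gflops_per_watt
print(f"Gap vs. first Xeon Phi: ~{gap:.0f}x")                                # roughly 11x
```

In other words, hitting exascale under a sane power budget means roughly an order-of-magnitude efficiency gain beyond the first Xeon Phi.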

Clearly, exascale is still a ways away—that’s why Intel is targeting 2018 as the year it becomes reality. Petascale computing was first hit in 2008, and the world’s biggest clusters have soared more than an order of magnitude past that mark. But Moore’s Law alone won’t be enough to take HPC much further, some experts believe.

"We're at the point where the processors themselves aren't really getting any faster," Michael Papka of the Argonne National Laboratory—home of the third-fastest supercomputer—told Computerworld. Instead, increasing the size of clusters and improving the use of parallel processing is responsible for much of the speed gains we see each time a new version of the Top 500 supercomputers list is announced. Single-core performance has stagnated on the consumer side too, with speed gains coming from adding cores, running multiple threads per core, and other clever strategies.

Knights Corner was actually previewed one year ago, as Ars reported at the time. What’s new is the Xeon Phi name and Intel’s promise that it will be delivered this year—although an exact release date and pricing weren’t announced. The chips are far enough along that Intel says they’ll be used in a supercomputer called Stampede to be deployed next year at the Texas Advanced Computing Center, the IDG News Service reported. The Xeon Phi and its "Many Integrated Core" (MIC) technology also make an appearance on the Top 500 list in a 119-teraflop cluster ranked 150th in the world.

Promising exascale computing in 2018 is rather easy in 2012. But the race to exascale is on, and Intel will have plenty of competition.

Promoted Comments

They're called "three year old GPUs". My gaming box is made with two of them, a pair of Radeon HD 5750s. They're rated for 82 watts each and so deliver 12 gigaflops per watt flat out.

I imagine something modern would be even more efficient.
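Taking the commenter's own figures at face value (the 82 W and 12 gigaflops-per-watt numbers are theirs, not verified manufacturer specs), the per-card arithmetic does land right around a teraflop:

```python
# Quick check of the per-card numbers quoted above (the commenter's figures,
# not verified manufacturer specs).
board_power_w = 82.0       # quoted power per Radeon HD 5750
gflops_per_watt = 12.0     # quoted efficiency "flat out"

peak_gflops = board_power_w * gflops_per_watt
print(f"~{peak_gflops:.0f} GFLOPS per card")   # ~984 GFLOPS, i.e. roughly one teraflop
```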

Yup. Video cards have been able to claim this type of performance for a couple of years, which is why they've been used in supercomputer clusters and why NVIDIA and AMD have divisions targeting this use. The problem is that they are less flexible in how they can be used, so not all calculations work well on them, and it typically requires more effort to get code running well on them. The Intel Xeon Phi chips will be using the x86 architecture and will be more capable and more familiar for people to work with.

This is the general trade-off that is almost always present. The more focused the design of a chip is, the faster and/or more efficient it can be at performing those tasks. The more flexible it is, the bigger/slower it will be. There can be many different points on this spectrum that are useful for different situations and are profitable for different companies to pursue.

46 Reader Comments

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us? Will we be able to buy versions of those to make reddit load faster?

Sorry for all the questions, I just don't see a practical application to the epeen contest. It sounds cool though. . .

It can be really tough to write multi-threaded code. I have a 6-core AMD, and yes it's fast, and yes, sometimes I can run simulations and such that use all six cores, but most of the time four of the six are idle, and only one is doing something I coded.

It's too bad we aren't seeing more gains within a core. I understand the physics and I know the reasons, but it's still disappointing.
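For readers wondering what it takes to keep all six cores busy, here is a minimal, hypothetical sketch of the explicit work the commenter is describing; the simulation function, seed count, and worker count are placeholders rather than anyone's real workload:

```python
# Minimal sketch of farming independent simulation runs out to multiple cores.
# The work function and parameters below are hypothetical placeholders.
from multiprocessing import Pool

def run_simulation(seed):
    """Stand-in for an independent, CPU-bound simulation run."""
    x, total = seed, 0.0
    for _ in range(1_000_000):
        x = (1103515245 * x + 12345) % (2 ** 31)   # toy pseudo-random recurrence
        total += (x / 2 ** 31) ** 2
    return total

if __name__ == "__main__":
    seeds = range(24)                  # independent runs to distribute
    with Pool(processes=6) as pool:    # one worker per core on a 6-core CPU
        results = pool.map(run_simulation, seeds)
    print(sum(results))
```

Embarrassingly parallel jobs like this are the easy case; the hard part, as the commenter notes, is code where the threads actually have to coordinate.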

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us? Will we be able to buy versions of those to make reddit load faster?

Sorry for all the questions, I just don't see a practical application to the epeen contest. It sounds cool though. . .

The world's fastest supercomputer today is going to be used for nuclear simulation, which helps extend the lifespan of nuclear weapons (it was covered in the previous article the first paragraph links to). Molecular modeling and weather simulation are some of the things many HPC clusters do today.

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us? Will we be able to buy versions of those to make reddit load faster?

Sorry for all the questions, I just don't see a practical application to the epeen contest. It sounds cool though. . .

The world's fastest supercomputer today is going to be used for nuclear simulation, which helps extend the lifespan of nuclear weapons (it was covered in the previous article the first paragraph links to). Molecular modeling and weather simulation are some of the things many HPC clusters do today.

Wait, molecular modeling: why would that take up so much space? Would it be like Google Maps on steroids, where you could zoom down to an individual cell of a particular species of creature and see said cell in the creature react dynamically (like a T cell attaching itself to a virus)?

This is a perfectly nice write up and all, but, **sigh**, I miss Stokes.

We were going to build a Stokes hologram to write chip stories for us, but it requires an exascale computer. In the meantime I linked back to one of Stokes's old stories in the sixth paragraph. Hope you enjoy it!

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us? Will we be able to buy versions of those to make reddit load faster?

Sorry for all the questions, I just don't see a practical application to the epeen contest. It sounds cool though. . .

Not everything needs to be translated into a simplistic consumer perspective - there are many scientific and engineering applications that can use all the processing power one can deliver. If you think this whole thing is an "epeen contest" ... I suggest you broaden your horizons by looking into the computational needs of people running simulations in varied areas such as protein synthesis, astrophysics, semiconductor transport and neurosciences just to name a few. Hell, even a simple script running on one of my data files with over 200-300k records will load up all my cores and bring things to a slow crawl - and that's just simple data crunching, not even complex iterative mathematics.

Sorry if I came across as snarky ... it's just that we hear the same argument every time a more powerful processor is mentioned ... and it makes me weary.

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us? Will we be able to buy versions of those to make reddit load faster?

Sorry for all the questions, I just don't see a practical application to the epeen contest. It sounds cool though. . .

Not everything needs to be translated into a simplistic consumer perspective - there are many scientific and engineering applications that can use all the processing power one can deliver. If you think this whole thing is an "epeen contest" ... I suggest you broaden your horizons by looking into the computational needs of people running simulations in varied areas such as protein synthesis, astrophysics, semiconductor transport and neurosciences just to name a few. Hell, even a simple script running on one of my data files with over 200-300k records will load up all my cores and bring things to a slow crawl - and that's just simple data crunching, not even complex iterative mathematics.

Sorry if I came across as snarky ... it's just that we hear the same argument every time a more powerful processor is mentioned ... and it makes me weary.

No worries, it's why I asked. I didn't really mean it in such a base-consumer perspective, I should have asked how do these machines get used outside of the shops that build them? Who gets to use them - is time sharing involved or something along those lines? I think it's more the tone of these "ERRRMERRGERRRRDDDD!!! NEWEST FASTERST SUPAHCOMPUTAZ!!" articles that make it seem like an E-peen contest. Jon did a better job at avoiding that in this particular write-up, but the other handful of articles I saw on this topic today were totally of the "EPEEN! USA! #1!!!" variety.

They're called "three year old GPUs". My gaming box is made with two of them, a pair of Radeon HD 5750s. They're rated for 82 watts each and so deliver 12 gigaflops per watt flat out.

I imagine something modern would be even more efficient.

I was curious about this as well. Even ATI boasted about how the 4870 was one of the first GPUs to break 1 teraflop all on its own. One thing I can think of, though, is the level of error correction and the increase in long-term reliability, as most of the 48XX-series GPUs (and coming up, some of the 58XX series) are starting to burn out or already have. For the projects these systems are going to be put to work on, reliability and accuracy are probably a bit more crucial than normal consumer gaming demands. But then again, gaming has helped push the requirements for more speed and efficiency in the first place...

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us?

There are myriad uses for supercomputers that provide insight into natural phenomena, can help us design better pharmaceuticals, etc.:

- Blood flow/hemodynamics simulation: Clinical needs in thrombosis risk assessment, anti-coagulation therapy, and stroke research would significantly benefit from an improved understanding of the microcirculation of blood. (A. Peters et al., "Multiscale Simulation of Cardiovascular Flows on the IBM Bluegene/P: Full Heart-Circulation System at Red-Blood Cell Resolution," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2010; A. Rahimian et al., "Petascale Direct Numerical Simulation of Blood Flow on 200K Cores and Heterogeneous Architectures," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2010; M. Bernaschi et al., "Petaflop Biofluidics Simulations on a Two Million-Core System," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2011)

- Earthquake simulation: Petascale simulations are needed to understand the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures (>1 Hz). (Y. Cui, "Scalable Earthquake Simulation on Petascale Supercomputers," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2010)

- Discovering new materials: The mechanical properties and performance of metal materials depend on the intrinsic microstructures in these materials. In order to develop engineering materials and to enable design with multifunctional materials, it is essential to predict the microstructural patterns, such as dendritic structures, observed in solidified metals. (T. Shimokawabe et al., "Peta-scale Phase-Field Simulation for Dendritic Solidification on the TSUBAME 2.0 Supercomputer," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2011)

- Molecular dynamics simulation: Molecular dynamics simulations of biological molecules give scientists the ability to trace atomic motions and have helped yield deep insights into molecular mechanisms that experimental approaches could not have achieved alone, e.g., the “folding” of proteins into their native three-dimensional structures, the structural changes that underlie protein function, and the interactions between two proteins or between a protein and a candidate drug molecule. (D. E. Shaw et al., "Millisecond-Scale Molecular Dynamics Simulation on Anton," Proceedings of the ACM International Conference for High Performance Computing, 2009)

So, what does that all mean? What do those ballscrushingly fast computers do all day? What are they used for? Clearly not Crysis. . . How does increasing speed that much benefit the rest of us?

There are myriad uses for supercomputers that provide insight into natural phenomena, can help us design better pharmaceuticals, etc.:

- Blood flow/hemodynamics simulation: Clinical needs in thrombosis risk assessment, anti-coagulation therapy, and stroke research would significantly benefit from an improved understanding of the microcirculation of blood. (A. Peters et al., "Multiscale Simulation of Cardiovascular Flows on the IBM Bluegene/P: Full Heart-Circulation System at Red-Blood Cell Resolution," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2010; A. Rahimian et al., "Petascale Direct Numerical Simulation of Blood Flow on 200K Cores and Heterogeneous Architectures," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2010; M. Bernaschi et al., "Petaflop Biofluidics Simulations on a Two Million-Core System," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2011)

- Earthquake simulation: Petascale simulations are needed to understand the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures (>1 Hz). (Y. Cui, "Scalable Earthquake Simulation on Petascale Supercomputers," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2010)

- Discovering new materials: The mechanical properties and performance of metal materials depend on the intrinsic microstructures in these materials. In order to develop engineering materials and to enable design with multifunctional materials, it is essential to predict the microstructural patterns, such as dendritic structures, observed in solidified metals. (T. Shimokawabe et al., "Peta-scale Phase-Field Simulation for Dendritic Solidification on the TSUBAME 2.0 Supercomputer," Proceedings of the ACM/IEEE International Conference for High Performance Computing, 2011)

- Molecular dynamics simulation: Molecular dynamics simulations of biological molecules give scientists the ability to trace atomic motions and have helped yield deep insights into molecular mechanisms that experimental approaches could not have achieved alone, e.g., the “folding” of proteins into their native three-dimensional structures, the structural changes that underlie protein function, and the interactions between two proteins or between a protein and a candidate drug molecule. (D. E. Shaw et al., "Millisecond-Scale Molecular Dynamics Simulation on Anton," Proceedings of the ACM International Conference for High Performance Computing, 2009)

Of course, beyond this small subset of examples there are countless others: full brain simulations, weather simulations, atomic-scale physics simulations, Big Bang simulations...

[citation needed]

I kid. I kid. Well played, Sir. Full brain simulations, that I could get behind. Weather simulations, not so much.

Just curious how the Itanium chips compare to these. Are they still undergoing active development at this point?

Good question. Is Itanium dying on the vine? Maybe Ars and other sites I frequent don't write many (enough) articles on the space. I don't recall many breakdowns of Power7 here either and that CPU is doing quite well still. Then there's the moribund SPARC that I used to love.

CPU discussions and releases used to be so much more interesting when x86 had 3 to 4 players, PowerPC existed outside embedded applications, Power was adding capabilities every release, Alpha was quick as hell (though MS wouldn't recompile their apps for it to use the 64 bits), and SGI made a decent MIPS box that had a kickass 3D welcome/config (for the '90s, my O2 was sweet).

They're called "three year old GPUs". My gaming box is made with two of them, a pair of Radeon HD 5750s. They're rated for 82 watts each and so deliver 12 gigaflops per watt flat out.

I imagine something modern would be even more efficient.

I was curious about this as well. Even ATI boasted about how the 4870 was one of the first GPUs to break 1 teraflop all on its own. One thing I can think of, though, is the level of error correction and the increase in long-term reliability, as most of the 48XX-series GPUs (and coming up, some of the 58XX series) are starting to burn out or already have. For the projects these systems are going to be put to work on, reliability and accuracy are probably a bit more crucial than normal consumer gaming demands. But then again, gaming has helped push the requirements for more speed and efficiency in the first place...

Really it has to do with how easy it is to take advantage of all of those flops. While GPUs have traditionally been great at heavily data-parallel workloads, they stumble when trying to handle branch-heavy code, more precise error and interrupt handling, virtualization, or memory coherency/consistency. A large number of more traditional CPUs will always have an advantage at extracting efficiency from more complex multi-threaded workloads, but will sacrifice some efficiency compared to workloads that "fit just right" into the GPU data-parallel paradigm.

They're called "three year old GPUs". My gaming box is made with two of them, a pair of Radeon HD 5750s. They're rated for 82 watts each and so deliver 12 gigaflops per watt flat out.

I imagine something modern would be even more efficient.

Yup. Video cards have been able to claim this type of performance for a couple of years, which is why they've been used in supercomputer clusters and why NVIDIA and AMD have divisions targeting this use. The problem is that they are less flexible in how they can be used, so not all calculations work well on them, and it typically requires more effort to get code running well on them. The Intel Xeon Phi chips will be using the x86 architecture and will be more capable and more familiar for people to work with.

This is the general trade-off that is almost always present. The more focused the design of a chip is, the faster and/or more efficient it can be at performing those tasks. The more flexible it is, the bigger/slower it will be. There can be many different points on this spectrum that are useful for different situations and are profitable for different companies to pursue.

Just curious how the Itanium chips compare to these. Are they still undergoing active development at this point?

Good question. Is Itanium dying on the vine? Maybe Ars and other sites I frequent don't write many (enough) articles on the space. I don't recall many breakdowns of Power7 here either and that CPU is doing quite well still. Then there's the moribund SPARC that I used to love.

CPU discussions and releases used to be so much more interesting when x86 had 3 to 4 players, PowerPC existed outside embedded applications, Power was adding capabilities every release, Alpha was quick as hell (though MS wouldn't recompile their apps for it to use the 64 bits), and SGI made a decent MIPS box that had a kickass 3D welcome/config (for the '90s, my O2 was sweet).

They're called "three year old GPUs". My gaming box is made with two of them, a pair of Radeon HD 5750s. They're rated for 82 watts each and so deliver 12 gigaflops per watt flat out.

I imagine something modern would be even more efficient.

[...] Really it has to do with how easy it is to take advantage of all of those flops. While GPUs have traditionally been great at heavily data-parallel workloads, they stumble when trying to handle branch-heavy code, more precise error and interrupt handling, virtualization, or memory coherency/consistency. A large number of more traditional CPUs will always have an advantage at extracting efficiency from more complex multi-threaded workloads, but will sacrifice some efficiency compared to workloads that "fit just right" into the GPU data-parallel paradigm.

Yup! Another issue I've seen with GPU computing is bandwidth. The controller for a computer's RAM is generally built into the CPU, meaning that, unless you want to deal with a large amount of latency between computations, you'll need to stay under the GPU's measly amount of RAM (measly meaning the highest RAM I've seen in a GPU is 8GB, whereas it's not atypical to see 128GB of RAM per board in a high-performance cluster). Thus, for data-intensive work (such as weather simulations), GPUs simply are not as efficient as CPUs (if you can thread the work enough). It's all a game of balancing the number of threads, thread speed, and memory latency (and storage and network latency in some cases).
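To put rough numbers on that bandwidth point (the dataset size and the roughly 8 GB/s PCIe 2.0 x16 rate below are illustrative assumptions, not measurements):

```python
# Rough illustration of why a working set larger than GPU memory hurts:
# the data has to be streamed across the PCIe bus. All figures are assumptions.
dataset_gb = 128.0           # hypothetical working set that fits in host RAM
gpu_memory_gb = 8.0          # high end of GPU memory cited in the comment
pcie_bandwidth_gb_s = 8.0    # ~PCIe 2.0 x16, an assumed round number

if dataset_gb > gpu_memory_gb:
    # Every full pass over the data crosses the bus at least once
    transfer_seconds = dataset_gb / pcie_bandwidth_gb_s
    print(f"~{transfer_seconds:.0f} s per pass spent just moving data")   # ~16 s
```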

I think it's going to be interesting to see how this stacks up against NVIDIA's Tesla technology.

Cheers

Best I can tell, this is an x86-GPU hybrid. It runs x86 code, but is designed more like a GPU than a traditional CPU. And if the on-board Linux is true, it acts pretty much like a cluster blade crammed onto a PCIe board. And there have been some "desktop clusters" showing up lately: either micro-ATX or mini-ITX stacks with dedicated networking stuffed into a server-cube-like case. Given that they act pretty much like the room-sized clusters of "old," one can take existing code and run it on something that fits on or under a desk. This is in pretty much the same way that academia has embraced OS X because it is an off-the-shelf personal computer running *nix.

No worries, it's why I asked. I didn't really mean it in such a base-consumer perspective, I should have asked how do these machines get used outside of the shops that build them? Who gets to use them - is time sharing involved or something along those lines? I think it's more the tone of these "ERRRMERRGERRRRDDDD!!! NEWEST FASTERST SUPAHCOMPUTAZ!!" articles that make it seem like an E-peen contest. Jon did a better job at avoiding that in this particular write-up, but the other handful of articles I saw on this topic today were totally of the "EPEEN! USA! #1!!!" variety.

Thanks for explaining ... I understand your frustrations. I guess it's more symptomatic of how "news" as a whole is disseminated nowadays - there seems to be no neutral perspective (although I thought this article did a good job).

Hat Monster wrote:

We already have teraflop single chip co-processors.

They're called "three year old GPUs". My gaming box is made with two of them, a pair of Radeon HD 5750s. They're rated for 82 watts each and so deliver 12 gigaflops per watt flat out.

I imagine something modern would be even more efficient.

On this branch of thought, to add to the good points by hangslice and evan_s ... not all computations can be made extremely parallel, and the bottleneck in those cases tends to become branches or conditionals and the theoretical throughput is not achieved. If there is good symbiosis between the code/compiler/CPU/GPGPU, maybe the bottlenecking can be reduced for massive throughput, and this is AMD/Nvidia's approach. Intel is going the other way with a many-x86-core system and the assumption that this will circumvent some of these problems. Both may work depending on what a given user requires.

I guess no one actually looked at the new Top 500 list. There is a Xeon Phi system in there, ranked 150th: 9,800 cores, able to attain 119 TFlops out of a theoretical 181 TFlops max while consuming 100 kW. Having said that, performance per watt is pretty good but not that impressive. That Xeon Phi system only edges out a Sandy Bridge Xeon cluster by about 12% in terms of performance per watt. It does beat out several GPU solutions in terms of performance per watt, but neither AMD nor nVidia has launched new GPU compute products this year that'll narrow the performance-per-watt gap or even overtake the Xeon Phi.

As much as Intel likes to tout performance per watt, the real shocker is the power efficiency of the new BlueGene/Q systems. BlueGene/Q offers 75% greater performance per watt than the Xeon Phi cluster.
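Working through the commenter's own figures (119 TFLOPS at roughly 100 kW, and the quoted 75 percent BlueGene/Q advantage), the efficiency comparison comes out roughly like this:

```python
# Performance-per-watt comparison using the figures quoted in the comment above.
xeon_phi_tflops = 119.0
xeon_phi_power_kw = 100.0

xeon_phi_gflops_per_watt = (xeon_phi_tflops * 1e3) / (xeon_phi_power_kw * 1e3)
print(f"Xeon Phi cluster: ~{xeon_phi_gflops_per_watt:.2f} GFLOPS/W")       # ~1.19

# The comment puts BlueGene/Q about 75% ahead on this metric
bluegene_q_gflops_per_watt = xeon_phi_gflops_per_watt * 1.75
print(f"BlueGene/Q (per the comment): ~{bluegene_q_gflops_per_watt:.2f} GFLOPS/W")  # ~2.08
```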

Just curious how the Itanium chips compare to these. Are they still undergoing active development at this point?

Good question. Is Itanium dying on the vine? Maybe Ars and other sites I frequent don't write many (enough) articles on the space. I don't recall many breakdowns of Power7 here either and that CPU is doing quite well still. Then there's the moribund SPARC that I used to love.

CPU discussions and releases used to be so much more interesting when x86 had 3 to 4 players, PowerPC existed outside embedded applications, Power was adding capabilities every release, Alpha was quick as hell (though MS wouldn't recompile their apps for it to use the 64 bits), and SGI made a decent MIPS box that had a kickass 3D welcome/config (for the '90s, my O2 was sweet).

Now, kids get off my lawn!!

Haven't been keeping up with PowerPC/POWER/Blue Gene? See the link in the first line of the article. Also look up the PowerPC A2: https://en.wikipedia.org/wiki/PowerPC_A2. Plus there are more than a few million PowerPC chips of one variety or another in the current Wii and in the new one, whenever it's out.

The only VLIW chips that seem to be going strong are TI's C6400 DSPs, which are in the OMAP chips in a lot of phones.

Most-used MIPS chips: the PIC32 from Microchip, the Chinese MIPS clones, or one of the Broadcom comms chips?

Just curious how the Itanium chips compare to these. Are they still undergoing active development at this point?

Good question. Is Itanium dying on the vine? Maybe Ars and other sites I frequent don't write many (enough) articles on the space. I don't recall many breakdowns of Power7 here either and that CPU is doing quite well still. Then there's the moribund SPARC that I used to love.

Itanium series 9500 part numbers have just started to float around, so it looks like Poulson-based Itaniums are due anytime now. With HP being the only major Itanium vendor, I'd expect a formal rollout for the new Itaniums when HP has validated new systems and is ready to ship them. This is a fresh new microarchitecture for Itanium which is worth exploring. It continues to rely on compiler-based scheduling, but its new implementation of symmetric multithreading should improve overall throughput. Having said that, it likely won't leapfrog the competition except for its own predecessor. Due to massive delays (the 65 nm-based Itaniums were released after Intel was shipping 32 nm x86 processors), the competition is far ahead of the elder Itaniums.

IBM has the POWER7+ waiting in the wings. The big improvement is that a leaked document indicates a 10 MB L3 eDRAM-based cache per core. Core count is still unknown, and there are rumors that the L3 cache actually resides on a separate die connected with through-silicon vias. If true, IBM could push core counts and clock speeds radically higher. No major core improvements are expected, but an increase in core count and clock speed should be more than enough to keep it competitive. POWER8 is due out late next year and will use a 22 nm manufacturing process.

Intel has a new quad-socket x86 platform based around the LGA 2011 form factor due out over the summer. Intel isn't killing off the LGA 1567 platform due to differences in core count, RAS features, and support for 8-socket systems. For most businesses, though, the quad-socket LGA 2011 platform will be 'good enough.'

SPARC is seeing a bit of a renaissance through Oracle. Around the time of the Oracle/Sun merger, the Rock processor was cancelled. Recent roadmaps hint that a successor to Rock is in development, or that some of the unique technology inside Rock is being reused in another SPARC core. The release date for this chip is further off, though.

equals42 wrote:

CPU discussions and releases used to be so much more interesting when x86 had 3 to 4 players, PowerPC existed outside embedded applications, Power was adding capabilities every release, Alpha was quick as hell (though MS wouldn't recompile their apps for it to use the 64 bits), and SGI made a decent MIPS box that had a kickass 3D welcome/config (for the '90s, my O2 was sweet).

Business politics is what killed off several of the x86 competitors, most notably the Alpha. PowerPC is still around in the embedded market, but ARM is all the rage, partially because it is encroaching on the x86 mobile ecosystem. PowerPC technology is going to be used in the Wii U and likely another gaming console in the coming generation. The only other ISA to have market share in the PC/server market would be SPARC, and business politics is pretty much the reason why it is still alive. Much like Itanium, recent designs have allowed it to be competitive again.

I wonder if this is why Apple delayed a major update to the Mac Pro? That would make a lot more sense.

As pointed out in the interplay with jcool, dlux, fitten & FreeFire, the prices would be exorbitant, and it still doesn't make sense to do such a minor spec bump (played as new, when it could have been done last year) to a platform you are going to rehash in 6 months or so. (I want to put something in here about tick, tock & tack, but anyway...)

I wonder if this is why Apple delayed a major update to the Mac Pro? That would make a lot more sense.

As pointed out in the interplay with jcool, dlux, fitten & FreeFire, the prices would be exorbitant, and it still doesn't make sense to do such a minor spec bump (played as new, when it could have been done last year) to a platform you are going to rehash in 6 months or so. (I want to put something in here about tick, tock & tack, but anyway...)

I have no idea what the pricing would be for something like this, but really, the larger point is, Apple better have something pretty special in mind for the Mac Pro in 2013 to make up for the fail they released a couple of weeks ago.