Intel Announces MIC Xeon Phi For Exascale Computing

By the time Xeon Phi actually ships in November, Kepler's big brother, K20, should also be ready to go. Nvidia certainly paints a picture of confidence, with a number of blog posts and product updates pointing towards CUDA education centers, the growth of GPGPU deployments, and Tesla's contribution to high-end computing -- but the scientists we've spoken to who have used Intel's Many Integrated Core products shed light on why Intel's x86 compatibility may win the company more long-term business.

Software Compatibility Still Matters
The scientists who work with the kinds of problems Tesla and Xeon Phi are meant to solve have invested years in creating the models and software solutions that they use. According to those we spoke to, the underlying code is an "overhead cost" -- something they have to deal with in order to further their research goals, but not the point or focus of the research itself.

The advantage of Knights Corner is that it provides excellent scaling out of the box when tested using OpenMP and MPI (Message Passing Interface). The groups we spoke to emphasized that while additional optimization would improve performance, the baseline scaling they saw from simply recompiling and running existing code on Xeon Phi, rather than on a standard x86 cluster, was already strong.

This puts Intel on a collision course with Nvidia, and the results may not be pretty. A review of NV's 10-K filings shows that the company claims strong Tesla sales in recent years: revenue in the Professional Solutions Group, which includes both Tesla and Quadro, grew 60% in fiscal 2011 (calendar 2010) thanks to Fermi, though sales in that segment in calendar 2012 (Nvidia's fiscal 2013) have been flat year-on-year.

There's no denying that Nvidia has worked tremendously hard to launch GPGPU or that it's created some business momentum around Tesla. The big unanswered question is whether that momentum will sustain it once Intel launches Knights Corner.

Intel's chief advantage in this realm is that it builds both the chips that power today's compute clusters and the chips it suggests researchers adopt in the future, without any intrinsic need to rewrite code or learn new practices. Nvidia is clearly tuning K20 to answer Santa Clara -- we'll see if it's enough.