Intel CIO Talks Big Data and the Benefits of Single-Socket Servers

As a CIO charged with supporting a very technical workforce, Intel's Kim Stevenson has seen some big changes, both in the clients and in the various data centers she supports.

When you think about Intel, you probably think of a company that makes the processors that control most of our PCs and the world's data centers. But, of course, Intel also uses plenty of its own processors in a lot of ways: running the business operations of the company, running the factories that create chips, and running the tools that help designers create next-generation chips.

So I was interested to talk to Intel CIO Kim Stevenson recently about some of the ways the company is using technology.

Although the company uses some SaaS products—for things such as human capital management and expense accounts—the bulk of the computing power is still within Intel's own data centers. That's because the company runs mission-critical applications for developing intellectual property, manufacturing, customer service, and product development, and thus far, these work better internally, Stevenson said. But she did say she was open to more cloud services, as Intel likes to "exploit innovation" wherever it happens, although the company is very sensitive about its proprietary data.

Intel has a high-performance computing data center consisting of 50,000 servers in California and Oregon, where many of its chip designers are located. Stevenson said this gets 88 to 90 percent utilization at all hours of every day, with many jobs queued for when fewer people are working.

Across Intel's data centers around the globe, it has about 63,000 Intel Xeon processor-based two-socket, single-socket, and four-socket servers, with a total of 630,000 Xeon cores, in what it calls its Intel Hyperscale Design Compute Grid. In the last six months alone, the company has deployed more than 22,000 servers based on the current "Haswell" generation of processors. Currently, about two-thirds of this grid consists of two-socket servers and one-third of single-socket servers, with the single-socket servers (primarily Haswell-based Xeon E3s) contributing about 88,000 of the total 630,000 cores. In general, she said, using single-socket servers instead of two-socket servers shows a modest improvement in performance but often a much larger decrease in software costs, because EDA (Electronic Design Automation) software is licensed per core.

Intel has lately tried moving to four single-socket servers instead of one two-socket server for equal throughput. Because the single-socket cluster has fewer total cores than the two-socket cluster for the same design-application throughput, and because the software licenses now cost about four times as much as the hardware, the company is seeing a significant saving in license costs. And because it is seeing 35 percent faster performance with the single-socket servers, it is slowing the yearly growth in demand for application licenses.
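To see why fewer total cores matters so much under per-core licensing, here is a minimal sketch of the math. All the numbers below (core counts per server and the license price) are illustrative assumptions, not figures Intel disclosed:

```python
# Illustrative sketch of per-core EDA license costs. All numbers here are
# hypothetical; the only idea taken from the article is that EDA software
# is licensed per core, so a cluster that delivers the same throughput
# with fewer total cores pays proportionally less for software.

def cluster_license_cost(servers: int, cores_per_server: int,
                         license_per_core: float) -> float:
    """Total yearly license cost for a cluster, assuming per-core pricing."""
    return servers * cores_per_server * license_per_core

LICENSE_PER_CORE = 10_000.0  # hypothetical yearly cost per licensed core

# Hypothetical equal-throughput configurations:
# one big two-socket server vs. four small single-socket servers.
two_socket = cluster_license_cost(servers=1, cores_per_server=36,
                                  license_per_core=LICENSE_PER_CORE)
single_socket = cluster_license_cost(servers=4, cores_per_server=4,
                                     license_per_core=LICENSE_PER_CORE)

print(two_socket)      # 360000.0
print(single_socket)   # 160000.0
print(f"savings: {1 - single_socket / two_socket:.0%}")  # savings: 56%
```

With licenses running at several times the hardware cost, even a modest reduction in total cores swings the overall economics toward the single-socket configuration.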

She said Intel is in the process of getting rid of hard drives and replacing them with SSDs and flash storage, which have shown big improvements in applications such as graphics and in engineering productivity. I asked about Xeon Phi, the company's many-core solution for high-performance computing, but she said her group has just started looking at it.

On the client side, she's also seen a migration to flash storage, with the company choosing encrypted SSDs as it cares so much about its intellectual property. As with most large companies, Intel has a replacement cycle that varies by the kind of work people are doing. Of new purchases, Stevenson said most users were choosing "2 in 1s," which is perhaps not surprising since the company has been so strongly pushing that concept.

Intel has moved to a BYOD process for mobile devices, with 25,000 users getting their mail in a container using a mobile device management platform.

On the manufacturing side, Intel is also using processing power and "big data" to reduce costs and to improve efficiency.

The process of making chips involves all sorts of complicated tools, each of which has to be meticulously calibrated to reduce errors. Chip wafers move from one machine to another for various steps in the process—often deposition, lithography, and etching for multiple layers—and at each step they generate data. (A wafer then gets subdivided into multiple individual chip dies, anywhere from around 100 to several thousand depending on the kind of chip that is made.)

She said Intel has been working hard to use the data from each machine not only to calibrate that one machine but also to improve the entire process, so that in the fab every machine talks to the others. In part, that's to reduce defects, but it's also to spot them as early as possible in the process, where fixing them is less expensive. (After a wafer is created, it goes through packaging and test processes.) Stevenson says this is part of a multi-year project to use data to help reduce errors, and said Intel is "just at the beginning" of this process.

Of course, that's not the only use of big data in the company. It also uses data to help in visualizations, and just in helping speed the time to market of all of the company's products.

Michael J. Miller's Forward Thinking Blog: forwardthinking.pcmag.com
Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. From 1991 to 2005, Miller was editor-in-chief of PC Magazine, responsible for the editorial direction, quality and presentation of the world's largest computer publication.
Until late 2006, Miller was the Chief Content Officer for Ziff Davis Media, responsible for overseeing the editorial positions of Ziff Davis's magazines, websites, and events. As Editorial Director for Ziff Davis Publishing since 1997, Miller took an active role in...