Intel Admits It Overstated Chips' Speed

By JOHN MARKOFF

Published: January 6, 1996

SAN FRANCISCO, Jan. 5—
An embarrassed Intel Corporation acknowledged today that a bug in a software program known as a compiler had led the company to overstate the speed of its microprocessor chips on an industry benchmark by about 10 percent.

However, industry analysts said the coding error, which was made in a pre-production version of the Intel software, was not an indictment of the technology of the world's largest chip maker but rather a sad commentary on a common industry practice of "cheating" on standardized performance tests.

Fred Pollack, the director of Intel's Measurement, Architecture and Planning Group, said the error was pointed out to Intel two days ago by a competitor, Motorola, and the company decided it was important to go public as quickly as possible.

Intel, which is based in Santa Clara, Calif., suffered a public relations black eye a year ago after the company appeared to ignore the significance of a math error in its Pentium processor. Ultimately the company backpedaled and offered to unconditionally replace the chip.

The error appeared in a test known as SPECint92, which the computer industry uses to measure a computer's speed on simple mathematical operations. In its technical commentary on the test results, Intel had acknowledged that it had "optimized" its compiler to improve its test scores. The company had also said that it did not like the practice but felt compelled to make the optimizations because its competitors were doing the same thing.

"It was an honest mistake," said Michael Slater, publisher of the Microprocessor Report, an industry newsletter. "Intel was pretty clean in saying we had to play the game, but we don't think it's a good thing. Unfortunately, when they decided to play the game they played it wrong."

The SPEC benchmark series was originally compiled by an industry consortium in an effort to put an end to wildly conflicting performance claims made by different computer companies in the late 1980's. The measure is now administered by SPEC, the Standard Performance Evaluation Corporation, a nonprofit group that is sponsored by 24 computer makers.

The tests are an effort to put a computer through a series of exercises that closely mirror real-world computing problems. But because software is continually refined to run the tests faster on a given piece of computer technology, and because computer designs continue to evolve, the tests must be updated regularly.

The SPEC benchmarks are generally used by technical and scientific customers in performance evaluations of competing computer systems, rather than by consumers evaluating personal computers. For example, a company like Dow Chemical or a Government agency like the Department of Defense might review the SPECmark numbers when deciding which workstations to purchase.

At the heart of Intel's problem is the practice of "tuning" compiler programs to recognize certain computing problems in the test and then substituting special handwritten pieces of code that are designed to solve the problems much faster than normal. In Intel's case, one of those substituted pieces of code contained a bug that skewed the results.
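The tuning practice described above can be sketched in miniature. The example below is purely illustrative: it does not reflect Intel's actual compiler or any real benchmark code, and all of the names in it are hypothetical. It shows a "compiler" that recognizes a known benchmark kernel and swaps in a hand-tuned replacement, which is exactly the kind of substitution where a single bug would silently skew every reported score.

```python
# Illustrative sketch only -- not Intel's compiler or SPEC code.
# A benchmark-aware compiler pass recognizes a known kernel and
# substitutes a hand-tuned version in place of the generic code.

def generic_sum_of_squares(n):
    # The straightforward code a benchmark might contain:
    # loop over 1..n and accumulate the squares.
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def hand_tuned_sum_of_squares(n):
    # A substituted closed-form version that runs far faster.
    # A subtle error in a routine like this (as happened to Intel)
    # would skew every benchmark result built on it.
    return n * (n + 1) * (2 * n + 1) // 6

# Table of kernels the "compiler" claims to recognize.
KNOWN_PATTERNS = {
    "sum_of_squares": hand_tuned_sum_of_squares,
}

def compile_benchmark(name, fallback):
    # If the kernel is recognized, substitute the hand-written
    # routine; otherwise emit the generic code unchanged.
    return KNOWN_PATTERNS.get(name, fallback)

fn = compile_benchmark("sum_of_squares", generic_sum_of_squares)
```

The point of the sketch is that correctness of the substituted routine is taken on faith: both versions must agree on every input, and nothing in the substitution step checks that they do.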

Many industry experts now tend to reject specialized tests because it is relatively easy to skew their results through compiler tuning.