The 8086 in Context

PCW: What are some of the distinguishing characteristics of the 8086 that made it stand out from other microprocessors of the day?

SM: Its most distinguishing characteristic was that it was a 16-bit microprocessor. I believe it was the first commercial 16-bitter in the microprocessor field. But the characteristics that I liked the most, and had the most fun designing and unifying, were the decimal arithmetic instructions and the string-processing instructions.

PCW: Why did Intel start making backward-compatible CPUs, and why did the company do it so well compared with other CPU manufacturers?

SM: The reason that Intel was concerned about backward-compatibility (and the reason everyone is, as well) is that you have a captured market base that you don't want to lose. If you have customers all using the 8008, when you come out with your 8080 processor you want your customers to be able to migrate their existing applications easily. If they had to rewrite all their applications, they would also be free to consider a new processor from the competition.

That's a lesson that Zilog learned the hard way. Zilog made its first splash with the Z80; that chip was compatible with Intel's 8080, so Zilog was able to steal Intel's customers easily. And it became a significant player in the marketplace. Then when the 16-bit race started, Zilog figured it had made a name for itself and could afford to do its own incompatible design for a 16-bit product, called the Z8000. But once Zilog's own customers discovered that programs could no longer be migrated from the Z80 forward, those customers became free to look around at the 16-bit marketplace, and they chose the 8086. Had Zilog gone with a 16-bit compatible upgrade of the Z80, history might have been different.

PCW: Was the 8086 designed with future backward-compatibility in mind?

SM: Backward-compatibility was certainly an issue when the 8086 was being designed. There were some instructions that were implemented and then hidden because we couldn't see a logical upgrade path for them in future processors. These instructions were actually on the chip, but we never documented them so that we would not be constrained by them in the future.

PCW: Can you share any funny, interesting, or unusual anecdotes about the 8086 that we haven't covered already?

SM: I always regret that I didn't fix up some idiosyncrasies of the 8080 when I had a chance. For example, the 8080 stores the low-order byte of a 16-bit value before the high-order byte. The reason for that goes back to the 8008, which did it that way to mimic the behavior of a bit-serial processor designed by Datapoint (a bit-serial processor needs to see the least significant bits first so that it can correctly handle carries when doing additions). Now there was no reason for me to continue this idiocy, except for some obsessive desire to maintain strict 8080 compatibility. But if I had made the break with the past and stored the bytes more logically, nobody would have objected. And today we wouldn't be dealing with issues involving big-endian and little-endian; the concepts just wouldn't exist.
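[The byte ordering Morse describes can be made concrete with a short sketch. Python's standard `struct` module lets us render the same 16-bit value in both conventions; the `<H`/`>H` format codes here are standard Python, not anything from the interview.]

```python
import struct

# A 16-bit value such as 0x1234. In the 8080/8086 convention described
# above (little-endian), the low-order byte 0x34 is stored at the lower
# address, followed by the high-order byte 0x12. Big-endian is the
# "more logical" order Morse wishes he had chosen.
value = 0x1234

little = struct.pack("<H", value)  # little-endian: low byte first
big = struct.pack(">H", value)     # big-endian: high byte first

print(little.hex())  # 3412
print(big.hex())     # 1234
```

The Datapoint rationale is visible here too: a bit-serial adder consuming `little` byte by byte sees the least significant byte first, so carries can propagate forward as the bytes arrive.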

Another thing I regret is that some of my well-chosen instruction mnemonics were renamed when the instruction set was published. I still think it's catchier to call the instruction SIGN-EXTEND, with the mnemonic SEX, than to call it CONVERT-BYTE-TO-WORD with the boring mnemonic CBW.
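[Whatever it's called, the instruction's effect is simple to sketch. This is a hypothetical helper modeling what the 8086's CBW does, which is sign-extend the 8-bit AL register into the 16-bit AX register; the function name and style are this sketch's, not Intel's.]

```python
def cbw(al: int) -> int:
    """Sign-extend an 8-bit value to 16 bits, as CBW does (AL -> AX)."""
    al &= 0xFF
    if al & 0x80:            # sign bit set: propagate 1s into the high byte
        return al | 0xFF00
    return al                # sign bit clear: high byte stays zero

print(hex(cbw(0x7F)))  # 0x7f    (+127 stays +127)
print(hex(cbw(0x80)))  # 0xff80  (-128 as a 16-bit two's-complement value)
```

Copying the sign bit into every high-order bit is what preserves the value's two's-complement interpretation at the wider width.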