Floating point units are standard on CPUs today, and even ordinary desktop applications make use of them (e.g. 3D effects). However, I wonder which applications initially drove the development and mass adoption of floating point units.

Ten years ago, I think most uses of floating point arithmetic fell into one of two categories:

- Engineering and science applications

- 3D graphics in computer games

I think that for any other application where non-integer numbers appeared at the time, fixed point arithmetic was sufficient (2D graphics) or even preferable (finance), so plain integer hardware would have sufficed.

I think these two applications were the major motivation for establishing floating point arithmetic in hardware as a standard. Can you name others, or is there a compelling reason to disagree?

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.

4 Answers

I think the real driver was silicon processes. Once circuits were shrunk small enough that there was room for floating-point units, they were incorporated. Same for MMUs and memory controllers. Engineers abhor empty die space.

From what I recall, it got to the point that it took an extra step to disable the FPU. Intel provided the exact same processor in two models, one with and one without the FPU. To make the model without an FPU they took the standard model and severed the link.
– Mike Brown, Mar 7 '12 at 18:36

In 1822, Charles Babbage designed a difference engine. A difference engine is an automatic, mechanical calculator designed to tabulate polynomial functions. The London Science Museum constructed a working difference engine from 1989 to 1991.

The need for floating point calculations has been with us since the dawn of computing.

@Doc: Because fixed-point, by definition, can't give you arbitrary levels of precision the way floating-point can. (Yes, floating-point comes with its own tradeoffs to offer that, but it's generally seen as more useful in most contexts.)
– Mason Wheeler, Mar 7 '12 at 17:20

@Gilbert: I suspect that this was the question the OP really meant, since he was also mentioning fixed point in contrast to floating point.
– Doc Brown, Mar 7 '12 at 17:30


@Doc: Floating point numbers have approximately constant relative precision throughout their range, while fixed point numbers have constant absolute precision. In physical measurements, we are usually much more interested in relative precision. We measure machined parts to the micron, but measure driving distance in tenths of miles.
– kevin cline, Mar 7 '12 at 18:17
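The precision contrast described in that comment can be sketched in a few lines of Python (this example is mine, not from the original thread; the three-decimal fixed-point format is a hypothetical one chosen for illustration). `math.ulp` (Python 3.9+) gives the spacing between adjacent floats near a value, which stays at roughly 2^-52 of the value for float64, while fixed point has a constant absolute step:

```python
import math

FIXED_STEP = 0.001  # hypothetical fixed-point format: three decimal places

for value in (0.001234, 1.234, 1234.567):
    # Fixed point: round to the nearest multiple of FIXED_STEP.
    # The absolute error is bounded by FIXED_STEP / 2 regardless of magnitude,
    # so the *relative* error blows up for small values.
    fixed = round(value / FIXED_STEP) * FIXED_STEP
    rel_err_fixed = abs(fixed - value) / value

    # Floating point: math.ulp is the gap between adjacent floats near `value`.
    # The absolute gap grows with magnitude, but the relative spacing stays
    # near 2**-52 for float64 -- roughly constant relative precision.
    rel_spacing_float = math.ulp(value) / value

    print(f"{value:>12g}  fixed rel err {rel_err_fixed:.1e}  "
          f"float rel spacing {rel_spacing_float:.1e}")
```

Running this shows the fixed-point relative error spanning many orders of magnitude across the three values, while the floating point relative spacing barely moves, which is exactly why physical measurements favor floating point.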

I worked on PCs at a time when a floating point co-processor was an optional extra. You had to pay a significant additional cost to have an 80x87 chip added to an 80x86 system, and few programs took advantage of it.

One exception was the first real killer-app for the IBM-PC, the ubiquitous spreadsheet program Lotus 1-2-3. This supported floating-point operations in hardware from relatively early on, substantially speeding up certain operations if you had an FPU.

When Intel got to the 80486, they started integrating the floating point unit onto the CPU, but even then they offered the 486SX variant with the FPU present but disabled. This was substantially cheaper than the 486DX chip and many people took that option to keep costs down.

By this point, the incremental cost in silicon terms must have been lower than the additional R&D and tooling costs of creating separate 486SX, 487SX and 486DX chips. In fact, if you bought a 486SX system and later added a 487SX co-processor, you effectively had two whole 486DX CPUs, each with a different half of the chip disabled!

By the time the Pentium came around, floating point units were expected, and its infamous FDIV bug caused quite a storm, not just in the scientific community, but in the business community too.

Computers exist to compute. Scientific applications, starting with table computations, have always been an important driver, and they need floating point (or careful manual scaling when using fixed point).

AFAIK, the first computer with floating point in hardware was the Zuse Z4, from the mid-1940s. The first "common" machine with FP capability was probably the IBM 704, from the mid-1950s.

For the Intel x86 family, the 8087 co-processor was announced in 1980, and until the FPU was integrated with the rest of the processor (which happened in the early 1990s), there was always a co-processor available, including third-party ones. At that time, serious scientific applications weren't done on PCs, but spreadsheets were among the programs that benefited from having a math coprocessor.