Scientific notation

Scientific notation (also referred to as scientific form or standard index form, or standard form in the UK) is a way of expressing numbers that are too big or too small to be conveniently written in decimal form. It is commonly used by scientists, mathematicians and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators it is usually known as "SCI" display mode.


Any given real number can be written in the form m×10^n in many ways: for example, 350 can be written as 3.5×10^2 or 35×10^1 or 350×10^0.

In normalized scientific notation (called "standard form" in the UK), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers, as the exponent n gives the number's order of magnitude. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^−1). The 10 and exponent are often omitted when the exponent is 0.
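The normalization rule can be sketched in a few lines of Python (`normalize` is a hypothetical helper, not part of any standard library; it relies on floating-point log10, so values extremely close to a power of ten may need extra care):

```python
import math

def normalize(x: float) -> tuple[float, int]:
    """Return (m, n) such that x == m * 10**n and 1 <= |m| < 10."""
    if x == 0:
        raise ValueError("zero has no normalized form")
    n = math.floor(math.log10(abs(x)))  # the order of magnitude
    return x / 10.0 ** n, n

print(normalize(350.0))  # (3.5, 2), i.e. 3.5×10^2
print(normalize(0.5))    # (5.0, -1), i.e. 5×10^-1
```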

Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation—although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation for instance) and to bases other than 10 (for example, 3.15×2^20).

Engineering notation (often named "ENG" display mode on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10^−9 m can be read as "twelve-point-five nanometers" and written as 12.5 nm, while its scientific notation equivalent 1.25×10^−8 m would likely be read out as "one-point-two-five times ten-to-the-negative-eight meters".
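Restricting the exponent to a multiple of 3 can be sketched similarly (`to_engineering` is a hypothetical helper; floating-point rounding can disturb borderline inputs, so this is an illustration rather than a robust implementation):

```python
import math

def to_engineering(x: float) -> tuple[float, int]:
    """Return (m, n) with x == m * 10**n, n a multiple of 3, 1 <= |m| < 1000."""
    n = 3 * math.floor(math.log10(abs(x)) / 3)
    return x / 10.0 ** n, n

# The Earth's circumference, about 4×10^7 m, becomes 40×10^6 m (40 Mm):
print(to_engineering(40000000.0))  # (40.0, 6)
```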

A significant figure is a digit in a number that adds to its precision. This includes all nonzero digits, zeros between significant digits, and zeros indicated to be significant.
Leading and trailing zeros are not significant because they exist only to show the scale of the number. Therefore, 1,230,400 usually has five significant figures: 1, 2, 3, 0, and 4; the final two zeros serve only as placeholders and add no precision to the original number.

When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the place-holding zeros are no longer required. Thus 1,230,400 would become 1.2304×10^6. However, there is also the possibility that the number may be known to six or more significant figures, in which case the number would be shown as (for instance) 1.23040×10^6. Thus, an additional advantage of scientific notation is that the number of significant figures is clearer.
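Standard string formatting illustrates the point: the precision field of the e format specifier fixes the number of digits after the decimal point, so the count of significant figures is explicit (Python shown here; printf-style formatting in other languages behaves the same way):

```python
x = 1230400
print(f"{x:.4e}")  # '1.2304e+06'  -- five significant figures
print(f"{x:.5e}")  # '1.23040e+06' -- six significant figures, trailing zero kept
```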

It is customary in scientific measurements to record all the definitely known digits from the measurement, and to estimate at least one additional digit if there is any information at all that allows the observer to make an estimate. The resulting number contains more information than it would without the extra digit(s), which may be considered significant because they convey some information leading to greater precision in measurements and in aggregations of measurements (adding or multiplying them together).

Additional information about precision can be conveyed through additional notations. It is often useful to know how exact the final digit(s) are. For instance, the accepted value of the elementary charge can properly be expressed as 1.6021766208(98)×10^−19 C,[2] which is shorthand for (1.6021766208±0.0000000098)×10^−19 C.

Most calculators and many computer programs present very large and very small results in scientific notation, typically invoked by a key labelled EXP (for exponent), EEX (for enter exponent), EE, EX, E, or ×10^x depending on vendor and model. Because superscripted exponents like 10^7 cannot always be conveniently displayed, the letter E (or e) is often used to represent "times ten raised to the power of" (which would be written as "× 10^n") and is followed by the value of the exponent; in other words, for any two real numbers m and n, the usage of "mEn" would indicate a value of m × 10^n. In this usage the character e is not related to the mathematical constant e or the exponential function e^x (a confusion that is unlikely if scientific notation is represented by a capital E). Although the E stands for exponent, the notation is usually referred to as (scientific) E-notation rather than (scientific) exponential notation. The use of E-notation facilitates data entry and readability in textual communication since it minimizes keystrokes, avoids reduced font sizes and provides a simpler and more concise display, but it is not encouraged in some publications.[3]
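Most programming languages accept and emit this E-notation directly; for example, in Python:

```python
x = float("1.2304E6")      # E-notation accepted on input
print(x)                   # 1230400.0
print(f"{x:E}")            # '1.230400E+06' -- upper-case E on output
print(f"{0.0040321:.4e}")  # '4.0321e-03'
```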

After the introduction of the first pocket calculators supporting scientific notation in 1972 (HP-35, SR-10) the term decapower was sometimes used in the emerging user communities for the power-of-ten multiplier in order to better distinguish it from "normal" exponents. Likewise, the letter "D" was used in typewritten numbers. This notation was proposed by Jim Davidson and published in the January 1976 issue of Richard J. Nelson's Hewlett-Packard newsletter 65 Notes[5] for HP-65 users, and it was adopted and carried over into the Texas Instruments community by Richard C. Vanderburgh, the editor of the 52-Notes newsletter for SR-52 users in November 1976.[6]

The ALGOL 60 (1960) programming language uses a subscript ten "₁₀" character instead of the letter E, for example: 6.022₁₀23.[14][15]

The use of the subscript "₁₀" in the various Algol standards provided a challenge on some computer systems that did not provide such a character. As a consequence, Stanford University's Algol-W required the use of a single quote, e.g. 6.02486'+23,[16] and some Soviet Algol variants allowed the use of the Cyrillic character "ю", e.g. 6.022ю+23.

Scientific notation also enables simpler order-of-magnitude comparisons. A proton's mass is 0.0000000000000000000000000016726 kg. If written as 1.6726×10^−27 kg, it is easier to compare this mass with that of an electron, given below. The order of magnitude of the ratio of the masses can be obtained by comparing the exponents instead of the more error-prone task of counting the leading zeros. In this case, −27 is larger than −31 and therefore the proton is roughly four orders of magnitude (10,000 times) more massive than the electron.
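The comparison of exponents can be reproduced directly (a small sketch; the masses are the approximate values quoted in this article):

```python
import math

m_proton = 1.6726e-27        # kg
m_electron = 9.10938356e-31  # kg

# The order of magnitude is the exponent in normalized notation.
om_p = math.floor(math.log10(m_proton))    # -27
om_e = math.floor(math.log10(m_electron))  # -31
print(om_p - om_e)  # 4 -> the proton is ~10^4 times more massive
```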

Scientific notation also avoids misunderstandings due to regional differences in certain quantifiers, such as billion, which might indicate either 10^9 or 10^12.

In physics and astrophysics, the number of orders of magnitude between two numbers is sometimes referred to as "dex", a contraction of "decimal exponent". For instance, if two numbers are within 1 dex of each other, then the ratio of the larger to the smaller number is less than 10. Fractional values can be used, so if within 0.5 dex, the ratio is less than 10^0.5 ≈ 3.16, and so on.
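A dex separation is simply the difference of the base-10 logarithms, as in this sketch (`dex` is a hypothetical helper for positive inputs):

```python
import math

def dex(a: float, b: float) -> float:
    """Separation in dex (decimal exponents) between two positive numbers."""
    return abs(math.log10(a) - math.log10(b))

print(dex(200.0, 100.0) < 1)      # True: within 1 dex, so the ratio is below 10
print(round(dex(1000.0, 10.0), 9))  # 2.0: two orders of magnitude apart
```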

In normalized scientific notation, in E-notation, and in engineering notation, the space that is allowed only before and after "×" or in front of "E" (in typesetting, a normal-width space or a thin space) is sometimes omitted, though it is less common to do so before the alphabetical character.[21]

An electron's mass is about 0.000000000000000000000000000000910938356 kg.[22] In scientific notation, this is written 9.10938356×10^−31 kg (in SI units).

The Earth's mass is about 5972400000000000000000000 kg.[23] In scientific notation, this is written 5.9724×10^24 kg.

The Earth's circumference is approximately 40000000 m.[24] In scientific notation, this is 4×10^7 m. In engineering notation, this is written 40×10^6 m. In SI writing style, this may be written 40 Mm (40 megameters).

An inch is defined as exactly 25.4 mm. Quoting a value of 25.400 mm shows that the value is correct to the nearest micrometer. An approximated value with only two significant digits would be 2.5×10^1 mm instead. As there is no limit to the number of significant digits, the length of an inch could, if required, be written as (say) 2.54000000000×10^1 mm instead.

Converting a number in these cases means either converting it into scientific notation, converting it back into decimal form, or changing the exponent part. None of these alters the actual number, only how it is expressed.

First, move the decimal separator a sufficient number of places, n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append "× 10^n"; to the right, "× 10^−n". To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and "× 10^6" appended, resulting in 1.2304×10^6. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left and yield −4.0321×10^−3 as a result.
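The digit-shifting procedure can be sketched with Python's decimal module, which exposes a number's digits and exponent directly (`to_scientific` is a hypothetical helper; the output uses this article's "m × 10^n" style):

```python
from decimal import Decimal

def to_scientific(s: str) -> str:
    """Render a decimal string in normalized scientific notation (1 <= |m| < 10)."""
    sign, digits, exp = Decimal(s).normalize().as_tuple()
    n = exp + len(digits) - 1  # exponent after placing the point behind the first digit
    mant = str(digits[0])
    if len(digits) > 1:
        mant += "." + "".join(map(str, digits[1:]))
    return ("-" if sign else "") + mant + f" × 10^{n}"

print(to_scientific("1230400"))     # 1.2304 × 10^6
print(to_scientific("-0.0040321"))  # -4.0321 × 10^-3
```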

To convert a number from scientific notation to decimal notation, first remove the × 10^n on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304×10^6 would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321×10^−3 would have its decimal separator moved 3 digits to the left and be −0.0040321.

Conversion between different scientific notation representations of the same number is achieved by performing opposite operations on the two parts: multiplying or dividing the significand by a power of ten while subtracting from or adding to the exponent. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below.
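The compensating adjustment can be sketched as follows (`rescale` is a hypothetical helper; the value m × 10^n is preserved, only its representation changes):

```python
def rescale(m: float, n: int, shift: int) -> tuple[float, int]:
    """Shift the significand's decimal point `shift` places to the left
    and add `shift` to the exponent; the value m * 10**n is unchanged."""
    return m / 10.0 ** shift, n + shift

print(rescale(350.0, 0, 2))  # (3.5, 2): 350×10^0 == 3.5×10^2
```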

While base ten is normally used for scientific notation, powers of other bases can be used too,[25] base 2 being the next most commonly used one.

For example, in base-2 scientific notation, the number 1001b in binary (=9d) is written as
1.001b × 2d^11b or 1.001b × 10b^11b using binary numbers (or shorter 1.001 × 10^11 if the binary context is obvious). In E-notation, this is written as 1.001bE11b (or shorter: 1.001E11) with the letter E now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter B instead of E,[26] a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968,[27] as in 1.001bB11b (or shorter: 1.001B11). For comparison, the same number in decimal representation: 1.125 × 2^3 (using decimal representation), or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating point numbers, where the exponent is displayed as a decimal number even in binary mode, so the above becomes 1.001b × 10b^3d or shorter 1.001B3.[26]
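Python's math.frexp exposes the underlying base-2 decomposition, from which the binary scientific form follows (a small sketch; frexp returns a significand in [0.5, 1), so it is doubled here to land in the conventional range [1, 2)):

```python
import math

def base2_sci(x: float) -> str:
    """Write a positive float as m × 2^e with 1 <= m < 2."""
    m, e = math.frexp(x)  # x == m * 2**e with 0.5 <= m < 1
    return f"{m * 2} × 2^{e - 1}"

print(base2_sci(9.0))  # 1.125 × 2^3
```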

This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and the usage of IEC binary prefixes (e.g. 1B10 for 1×2^10 (kibi), 1B20 for 1×2^20 (mebi), 1B30 for 1×2^30 (gibi), 1B40 for 1×2^40 (tebi)).

Similar to B (or b[28]), the letters H[26] (or h[28]) and O[26] (or o,[28] or C[26]) are sometimes also used to indicate times 16 or 8 to the power, as in 1.25 = 1.40h × 10h^0h = 1.40H0 = 1.40h0, or 98000 = 2.7732o × 10o^5o = 2.7732o5 = 2.7732C5.[26]

Another similar convention to denote base-2 exponents is using a letter P (or p, for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal.[29] This notation can be produced by implementations of the printf family of functions following the C99 specification and the IEEE Std 1003.1 (POSIX) / Single Unix Specification standard, when using the %a or %A conversion specifiers.[29][30][31] Starting with C++11, C++ I/O functions could parse and print the P-notation as well, and the notation has been fully adopted by the language standard since C++17.[32] Apple's Swift supports it as well.[33] It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2^42.
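Python exposes this P-notation through the float.hex and float.fromhex methods, for example:

```python
# Parse P-notation: 1.3DE (hexadecimal significand) times 2 to the 42nd power.
x = float.fromhex("0x1.3DEp42")
print(x)             # 5461050916864.0
print((1.25).hex())  # '0x1.4000000000000p+0'
```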

^ Vanderburgh, Richard C., ed. (November 1976). "Decapower" (PDF). 52-Notes – Newsletter of the SR-52 Users Club. 1 (6): 1. V1N6P1. Archived (PDF) from the original on 2017-05-28. Retrieved 2017-05-28. Decapower – In the January 1976 issue of 65-Notes (V3N1P4) Jim Davidson (HP-65 Users Club member #547) suggested the term "decapower" as a descriptor for the power-of-ten multiplier used in scientific notation displays. I'm going to begin using it in place of "exponent" which is technically incorrect, and the letter D to separate the "mantissa" from the decapower for typewritten numbers, as Jim also suggests. For example, 123−45 [sic] which is displayed in scientific notation as 1.23 −43 will now be written 1.23D-43. Perhaps, as this notation gets more and more usage, the calculator manufacturers will change their keyboard abbreviations. HP's EEX and TI's EE could be changed to ED (for enter decapower). [1] Archived 2014-08-03 at the Wayback Machine (NB. The term decapower was frequently used in subsequent issues of this newsletter up to at least 1978.)

^"floating point literal". cppreference.com. Archived from the original on 2017-04-29. Retrieved 2017-03-11. The hexadecimal floating-point literals were not part of C++ until C++17, although they can be parsed and printed by the I/O functions since C++11: both C++ I/O streams when std::hexfloat is enabled and the C I/O streams: std::printf, std::scanf, etc. See std::strtof for the format description.