For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, the binary logarithm of 8 is 3, the binary logarithm of 16 is 4, and the binary logarithm of 32 is 5.

A table of powers of two published by Michael Stifel in 1544 can also be interpreted (by reversing its rows) as being a table of binary logarithms.[1][2] The application of binary logarithms to music theory was established by Leonhard Euler in 1739, long before information theory and computer science became disciplines of study. As part of his work in this area, Euler included a table of binary logarithms of the integers from 1 to 8, to seven decimal digits of accuracy.[3][4]

In mathematics, the binary logarithm of a number n is written as log₂ n. However, several other notations for this function have been used or proposed, especially in application areas.

Some authors write the binary logarithm as lg n.[5][6] Donald Knuth credits this notation to a suggestion of Edward Reingold,[7] but its use in both information theory and computer science dates to before Reingold was active.[8][9] The binary logarithm has also been written as log n, with a prior statement that the default base for the logarithm is 2.[10][11][12]

Another notation that is sometimes used for the same function (especially in the German language) is ld n, from the Latin logarithmus duālis.[13] The ISO 31-11 and ISO 80000-2 specifications recommend yet another notation, lb n; in these specifications, lg n is instead reserved for log₁₀ n. However, the ISO notation has not come into common use.

In information theory, the definition of the amount of self-information and information entropy is often expressed with the binary logarithm, corresponding to making the bit be the fundamental unit of information. However, the natural logarithm and the nat are also used in alternative notations for these definitions.[14]
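As a concrete illustration (a minimal sketch, not drawn from the cited sources), the entropy of a discrete distribution in bits is a binary-logarithm sum:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly one bit of entropy; a biased coin carries less.
print(entropy_bits([0.5, 0.5]))  # 1.0
print(entropy_bits([0.9, 0.1]))  # about 0.469
```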

According to Ramsey's theorem, every n-vertex undirected graph has either a clique or an independent set of size logarithmic in n. The precise size that can be guaranteed is not known, but the best bounds known on its size involve binary logarithms. In particular, all graphs have a clique or independent set of size at least (1/2) log₂ n (1 − o(1)), and almost all graphs do not have a clique or independent set of size larger than 2 log₂ n (1 + o(1)).[18]

The binary logarithm also frequently appears in the analysis of algorithms,[12] not only because of the frequent use of binary number arithmetic in algorithms, but also because binary logarithms occur in the analysis of algorithms based on two-way branching.[7] If a problem initially has n choices for its solution, and each iteration of the algorithm reduces the number of choices by a factor of two, then the number of iterations needed to select a single choice is the integral part of log₂ n. This idea is used in the analysis of several algorithms and data structures. For example, in binary search, the size of the problem to be solved is halved with each iteration, and therefore roughly log₂ n iterations are needed to obtain a problem of size 1, which is solved easily in constant time. Similarly, a perfectly balanced binary search tree containing n elements has height log₂(n + 1) − 1.
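As an illustration of the halving argument (a minimal sketch with arbitrary names, not code from the cited sources), a binary search over n sorted items probes at most ⌊log₂ n⌋ + 1 positions:

```python
import math

def binary_search(sorted_items, target):
    """Locate target in a sorted list, counting the halving steps."""
    lo, hi = 0, len(sorted_items) - 1
    probes = 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2  # each probe halves the remaining range
        if sorted_items[mid] == target:
            return mid, probes
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

# For n = 1024 items, at most floor(log2(1024)) + 1 = 11 probes are needed.
index, probes = binary_search(list(range(1024)), 1023)
print(index, probes, math.log2(1024))  # 1023 11 10.0
```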

However, the running time of an algorithm is usually expressed in big O notation, ignoring constant factors. Since log₂ n = (logₖ n)/(logₖ 2), where k can be any number greater than 1, algorithms that run in O(log₂ n) time can also be said to run in, say, O(log₁₃ n) time. The base of the logarithm in expressions such as O(log n) or O(n log n) is therefore not important.[5] In other contexts, though, the base of the logarithm needs to be specified. For example, O(2^(log₂ n)) is not the same as O(2^(ln n)), because the former is equal to O(n) and the latter to O(n^0.6931...).
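The last claim is a one-line change-of-base calculation, spelled out here for clarity:

```latex
2^{\ln n} = \bigl(e^{\ln 2}\bigr)^{\ln n} = \bigl(e^{\ln n}\bigr)^{\ln 2}
          = n^{\ln 2} \approx n^{0.6931},
\qquad \text{whereas} \qquad 2^{\log_2 n} = n.
```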

Algorithms with running time O(n log n) are sometimes called linearithmic.[19] Some examples of algorithms with these running times include binary search, with running time O(log n), and comparison-based sorting algorithms such as merge sort and heapsort, with running time O(n log n).

[Figure: a microarray of expression data for approximately 8700 genes; the relative expression rates of these genes are represented using binary logarithms.]

In the analysis of microarray data in bioinformatics, expression rates of genes are often compared by using the binary logarithm of the ratio of expression rates. By using base 2 for the logarithm, a doubled expression rate is described by a log ratio of 1, a halved expression rate by a log ratio of −1, and an unchanged expression rate by a log ratio of zero.[25] Data points obtained in this way are often visualized as a scatterplot in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the MA plot and RA plot, which rotate and scale these log ratio scatterplots.[26]
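A minimal sketch of this transform (illustrative only; the function name is invented):

```python
import math

def log2_ratio(expression, reference):
    """Binary log of the ratio of two expression rates:
    +1 means doubled, -1 means halved, 0 means unchanged."""
    return math.log2(expression / reference)

print(log2_ratio(200.0, 100.0))  # 1.0  (doubled)
print(log2_ratio(50.0, 100.0))   # -1.0 (halved)
print(log2_ratio(100.0, 100.0))  # 0.0  (unchanged)
```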

In music theory, the interval or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from rational number ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the octave, a frequency ratio of 2:1. The number of octaves by which two tones differ is the binary logarithm of their frequency ratio.[27]

In order to study tuning systems and other aspects of music theory requiring finer distinctions between tones, it is helpful to have a measure of the size of an interval that is finer than an octave and is additive (as logarithms are) rather than multiplicative (as frequency ratios are). That is, if tones x, y, and z form a rising sequence of tones, then the measure of the interval from x to y plus the measure of the interval from y to z should equal the measure of the interval from x to z. Such a measure is given by the cent, which divides the octave into 1200 equal intervals (12 semitones of 100 cents each). Mathematically, given tones with frequencies f₁ and f₂, the number of cents in the interval between them is[27]

|1200 log₂(f₂/f₁)|.

The millioctave is defined in the same way, but with a multiplier of 1000 instead of 1200.
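Both measures are direct rescalings of the binary logarithm, as this sketch (with invented function names) shows:

```python
import math

def cents(f1, f2):
    """Size of the interval from frequency f1 to f2, in cents."""
    return 1200 * math.log2(f2 / f1)

def millioctaves(f1, f2):
    """The same interval measured in millioctaves."""
    return 1000 * math.log2(f2 / f1)

# An octave (2:1) is 1200 cents; an equal-tempered semitone is 100 cents.
print(cents(220.0, 440.0))                  # 1200.0
print(cents(440.0, 440.0 * 2 ** (1 / 12)))  # 100.0 (up to rounding)
```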

In competitive games and sports involving two players or teams in each game or match, the binary logarithm indicates the number of rounds necessary in a single-elimination tournament in order to determine a winner. For example, a tournament of 4 players requires log₂(4) = 2 rounds to determine the winner, a tournament of 32 teams requires log₂(32) = 5 rounds, etc. In this case, for n players/teams where n is not a power of 2, log₂ n is rounded up, since it will be necessary to have at least one round in which not all remaining competitors play. For example, log₂(6) is approximately 2.585, which rounds up to 3, indicating that a tournament of 6 teams requires 3 rounds (either two teams sit out the first round, or one team sits out the second round). The same number of rounds is also necessary to determine a clear winner in a Swiss-system tournament.[28]
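In code, the round count is simply a rounded-up binary logarithm (a sketch; the function name is invented):

```python
import math

def rounds_needed(competitors):
    """Rounds in a single-elimination tournament: ceil(log2(n))."""
    if competitors < 1:
        raise ValueError("need at least one competitor")
    return math.ceil(math.log2(competitors))

print(rounds_needed(4))   # 2
print(rounds_needed(6))   # 3  (log2(6) is about 2.585, rounded up)
print(rounds_needed(32))  # 5
```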

In photography, exposure values are measured in terms of the binary logarithm of the amount of light reaching the film or sensor, in accordance with the Weber–Fechner law describing a logarithmic response of the human visual system to light. A single stop of exposure is one unit on a base-2 logarithmic scale.[29][30] More precisely, the exposure value of a photograph is defined as

EV = log₂(N² / t),

where N is the f-number measuring the aperture of the lens during the exposure, and t is the number of seconds of exposure.
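A direct transcription of this definition (a sketch assuming the symbols above; not a standard library function):

```python
import math

def exposure_value(f_number, seconds):
    """EV = log2(N**2 / t) for f-number N and exposure time t in seconds."""
    return math.log2(f_number ** 2 / seconds)

# f/8 at 1/125 s: log2(64 * 125) is about 12.97, i.e. EV 13 to the nearest stop.
print(exposure_value(8.0, 1 / 125))
```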

Binary logarithms (expressed as stops) are also used in densitometry, to express the dynamic range of light-sensitive materials or digital sensors.[31]

The integer binary logarithm can be interpreted as the zero-based index of the most significant 1 bit in the input. In this sense it is the complement of the find first set operation, which finds the index of the least significant 1 bit. Many hardware platforms include support for finding the number of leading zeros, or equivalent operations, which can be used to quickly find the binary logarithm; see find first set for details. The fls and flsl functions in the Linux kernel[34] and in some versions of the libc software library also compute the binary logarithm (rounded up to an integer, plus one).
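In Python, the built-in int.bit_length() gives the same quantity as fls (the one-based index of the most significant 1 bit), so the zero-based integer binary logarithm is one less (a minimal sketch):

```python
def floor_log2(n):
    """Zero-based index of the most significant 1 bit: floor(log2(n))."""
    if n <= 0:
        raise ValueError("argument must be positive")
    return n.bit_length() - 1

print(floor_log2(1))   # 0
print(floor_log2(32))  # 5
print(floor_log2(33))  # 5 (rounded down)
```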

Computing the integral part is straightforward. For any x > 0, there exists a unique integer n such that 2ⁿ ≤ x < 2ⁿ⁺¹, or equivalently 1 ≤ 2⁻ⁿx < 2. Now the integral part of the logarithm is simply n, and the fractional part is log₂(2⁻ⁿx).[35] In other words:

log₂ x = n + log₂ y,  where y = 2⁻ⁿx and y ∈ [1, 2).

The fractional part of the result is log₂ y, and it can be computed recursively, using only elementary multiplication and division.[35] To compute the fractional part:

Start with a real number y ∈ [1, 2). If y = 1, then we are done and the fractional part is zero.

Otherwise, square y repeatedly until the result z lies in the interval [2, 4). Let m be the number of squarings needed. That is, z = y^(2ᵐ) with m chosen such that z ∈ [2, 4).

Taking the logarithm of both sides and doing some algebra:

log₂ z = 2ᵐ log₂ y
log₂ y = (log₂ z) / 2ᵐ = (1 + log₂(z/2)) / 2ᵐ = 2⁻ᵐ + 2⁻ᵐ log₂(z/2).

Once again z/2 is a real number in the interval [1, 2). Return to step 1 and compute the binary logarithm of z/2 using the same method recursively.

The result of this is expressed by the following formulas, in which mᵢ is the number of squarings required in the i-th recursion of the algorithm:

log₂ x = n + 2^(−m₁) (1 + 2^(−m₂) (1 + 2^(−m₃) (1 + ⋯)))
       = n + 2^(−m₁) + 2^(−m₁−m₂) + 2^(−m₁−m₂−m₃) + ⋯

In the special case where the fractional part in step 1 is found to be zero, this is a finite sequence terminating at some point. Otherwise, it is an infinite series that converges according to the ratio test, since each term is strictly less than the previous one (because every mᵢ ≥ 1). For practical use, this infinite series must be truncated to reach an approximate result. If the series is truncated after the i-th term, then the error in the result is less than 2^(−(m₁ + m₂ + ⋯ + mᵢ)).
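The whole procedure can be sketched in a few lines of Python (ordinary floating-point arithmetic is assumed, and the names are invented):

```python
import math

def log2_fractional(y, terms=8):
    """Fractional part of log2(y) for 1 <= y < 2, by repeated squaring."""
    result = 0.0
    weight = 1.0  # running factor 2**-(m1 + m2 + ... + mi)
    for _ in range(terms):
        if y == 1.0:
            break  # the series terminates exactly
        m = 0
        while y < 2.0:  # square until the value lands in [2, 4)
            y *= y
            m += 1
        weight *= 2.0 ** -m
        result += weight  # adds the term 2**-(m1 + ... + mi)
        y /= 2.0          # z/2 is back in [1, 2); recurse on it
    return result

def log2_full(x, terms=8):
    """log2(x) = integral part n plus the fractional part of 2**-n * x."""
    n = 0
    while x >= 2.0:
        x /= 2.0
        n += 1
    while x < 1.0:
        x *= 2.0
        n -= 1
    return n + log2_fractional(x, terms)

print(log2_full(10.0), math.log2(10.0))  # both near 3.321928...
```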

An alternative algorithm that computes a single bit of the output in each iteration, using a sequence of shift and comparison operations to determine which bit to output, is also possible.[36]
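One such scheme (sketched here under floating-point assumptions, and not necessarily the exact method of the cited source) squares the mantissa once per iteration: squaring doubles the logarithm, so the comparison against 2 reveals the next fractional bit, and a halving shifts it out:

```python
def log2_bits(y, bits=10):
    """Fractional bits of log2(y) for 1 <= y < 2, one bit per iteration."""
    out = []
    for _ in range(bits):
        y *= y            # doubles log2(y)
        if y >= 2.0:      # an integer bit appeared: output 1 ...
            out.append(1)
            y /= 2.0      # ... and shift it out
        else:
            out.append(0)
    return out

# log2(1.5) = 0.584962..., whose binary expansion begins 0.1001010111...
print(log2_bits(1.5, 10))  # [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]
```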

Goodrich, Michael T.; Tamassia, Roberto (2002), Algorithm Design: Foundations, Analysis, and Internet Examples, John Wiley & Sons, p. 23: "One of the interesting and sometimes even surprising aspects of the analysis of data structures and algorithms is the ubiquitous presence of logarithms ... As is the custom in the computing literature, we omit writing the base b of the logarithm when b = 2."