Introduction: Logarithms are usually computed with a Taylor series or another iterative algorithm, which costs valuable clock cycles or hardware. Such algorithms are also typically limited to a single logarithm base. This trick exploits the normalization and the explicit exponent in floating-point representations.

The Trick: A floating-point representation separates a number into a sign, an exponent (usually in excess, or biased, notation), and a significand, also known as the mantissa. The number represented is then N = M * 2^E.

The log (in whatever base is convenient) of that number is LOG(N) = LOG(M * 2^E) = LOG(M) + E*LOG(2). If M is normalized, 1 <= M < 2, then 0 <= LOG(M) < LOG(2). The exponent term alone gives a coarse estimate of the LOG (to within 6 dB), found by simply multiplying the exponent by a constant (LOG(2) in the appropriate base). For IEEE floating point, a constant offset must first be subtracted from the stored exponent to undo the excess notation (excess 127 for single precision).
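As a minimal C sketch of the coarse estimate, assuming IEEE 754 single precision: the biased exponent sits in bits 23..30, and subtracting the bias of 127 yields floor(log2(x)) for positive normalized inputs. The helper name coarse_log2 is mine, not from the original.

```c
#include <stdint.h>
#include <string.h>

/* Coarse log2 estimate for a positive, normalized IEEE 754 single-
 * precision float: extract the biased exponent field and subtract
 * the bias of 127. Multiply the result by log10(2) or ln(2) to get
 * the estimate in another base. */
int coarse_log2(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);       /* reinterpret the bits safely */
    return (int)((bits >> 23) & 0xFF) - 127;
}
```

For example, coarse_log2(10.0f) returns 3, since 10 = 1.25 * 2^3; the mantissa's contribution (log2 of 1.25) is what the finer step below recovers.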

If finer precision is required, and here is the real trick, we can obtain an estimate of the logarithm of the mantissa and add it to the exponent term to get a more accurate result. The logarithm of the mantissa is found with a small look-up table addressed by the most significant bits of M (the leading bit is always 1 when M is normalized, so it need not be part of the address). The more bits of M used, the larger the table and the more accurate the logarithm. A single address bit gets you to within about 3 dB, while a 5-bit address (32-entry table) gets within about 1/4 dB, which is usually all that is needed.

This trick works well in hardware (FPGA) implementations, since the table is small enough to fit in a single logic cell. In hardware it may also be convenient to replace the multiplication by LOG(2) with a second small table addressed by the exponent.