Two mathematicians have just created the most efficient known algorithm for multiplying very large numbers, as reported in an article in the April 2019 Quanta magazine. The two, Joris van der Hoeven of the French National Center for Scientific Research and David Harvey of Australia's University of New South Wales, created an algorithm that beat the previous record holder, Martin Fürer's algorithm from 2007.

In third or fourth grade, all of us were taught that the way to multiply numbers of more than a single digit is to stack the numbers on top of one another, multiply the digits in the ones column, the tens column, and the hundreds column, and then do all of the adding to get the answer.

The stacking, or carry-over, method is not unique to the U.S. classroom. It has been standard across civilizations and ages. Babylonian schoolchildren of four thousand years ago would feel right at home in an American classroom when the kids are being taught the carry-over method. It is also quite efficient when used to multiply classroom-worksheet kinds of problems: one digit times two digits, or two digits times two digits. A two-digit by two-digit problem requires four multiplications; a three-digit by three-digit problem requires nine.

However, once the problems become more practical, stacking and multiplying each digit in the top row by each digit in the bottom becomes quite onerous: when the numbers to be multiplied have 100 digits each, there will be 10,000 multiplications!
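To make the growth concrete, here is a small sketch (not from the article) that counts the single-digit multiplications the carry-over method performs:

```python
def schoolbook_digit_mults(a_digits: int, b_digits: int) -> int:
    """Count single-digit multiplications in the carry-over (schoolbook) method:
    every digit of one factor is multiplied by every digit of the other."""
    return a_digits * b_digits

print(schoolbook_digit_mults(2, 2))      # 4  (the worksheet case)
print(schoolbook_digit_mults(3, 3))      # 9
print(schoolbook_digit_mults(100, 100))  # 10000
```

The count grows as the square of the number of digits, which is exactly the bottleneck the new algorithms attack.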

In science and engineering, big numbers are everywhere. Constants such as the speed of light are important because the speed of light "allows you to describe all kinds of phenomena," according to Joris van der Hoeven. The stack-and-multiply method, even when executed by fast modern computers, encounters a speed bump when multiplying "millions or billions of digits," as when computers are asked to "accurately calculate pi or as part of the worldwide search for primes..."

To multiply one 1-billion-digit number by another requires 1 billion squared multiplications, which would take even a modern computer around 30 years.
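A back-of-the-envelope check of that figure (my own estimate of machine speed, not a number from the article) lines up with the claim:

```python
# One billion digits squared, at an assumed ~1 billion multiplications per second.
digits = 10**9
mults = digits**2                        # 10**18 single-digit multiplications
rate = 10**9                             # assumed operations per second
years = mults / rate / (3600 * 24 * 365)
print(round(years, 1))                   # ~31.7 years
```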

Mathematicians began searching for a better way to do this kind of multiplication in the 1960s. The better way would be an algorithm, a procedure to accomplish a specific task; that is, "to solve a general, well-specified problem." (Skiena, 2008)

The well-specified problem, according to David Harvey, the other co-author of the new algorithm, is to "turn some of the multiplications into additions, and the idea is additions will be faster for computers."

The first improved algorithm was created in 1971, but its authors, Arnold Schönhage and Volker Strassen, also "conjectured that there should be an even faster algorithm than the one they found." You can read about this algorithm here.

The prediction came true when Martin Fürer, a Penn State mathematician, produced a faster one in 2007. You can read more about this algorithm here.

The Quanta article includes an example of an algorithm created by Anatoly Karatsuba, a student of Andrey Kolmogorov, who spent a week trying to prove his teacher wrong. The Karatsuba algorithm betters the n^2-step limit proposed by his teacher, bringing the cost down to about n^1.58 steps.

Here is a 2-digit by 2-digit problem solved with the Karatsuba algorithm.
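A minimal recursive sketch of Karatsuba's trick, with a 2-digit example of my own choosing (the variable names and splitting base are assumptions, not the article's presentation). The key is that the middle term ad + bc is recovered from a single extra multiplication, so each split costs three multiplications instead of four:

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply x and y using three recursive multiplications per split
    instead of the four that the schoolbook method would use."""
    if x < 10 or y < 10:               # single-digit base case
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    base = 10**m
    a, b = divmod(x, base)             # x = a*base + b
    c, d = divmod(y, base)             # y = c*base + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a + b)(c + d) - ac - bd = ad + bc: one multiplication, two subtractions
    middle = karatsuba(a + b, c + d) - ac - bd
    return ac * base**2 + middle * base + bd

print(karatsuba(25, 63))   # 1575, same as 25 * 63
```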

Karatsuba replaced some of the multiplication, which requires n^2 steps in total, with addition, which requires only about 2n steps. When used with two 1024-digit numbers, Karatsuba requires 3^10, or roughly 59,000, multiplications, while the classical algorithm requires (2^10)^2, or roughly one million, steps.

As Martin Fürer is quoted in the Quanta article: “With addition, you do it a year earlier in school because it’s much easier, you can do it in linear time, almost as fast as reading the numbers from right to left.”