Euclid's algorithm (also called the Euclidean algorithm) is a method for efficiently finding the greatest common divisor (GCD) of two numbers. It is one of the oldest algorithms in common use, appearing in Euclid's Elements (c. 300 BC).

The GCD of two integers X and Y is the largest integer that divides both X and Y without leaving a remainder. The greatest common divisor is also known as the greatest common factor (GCF), highest common factor (HCF), greatest common measure (GCM), and highest common divisor.

Key Idea: The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number.

Example: GCD(20, 15) = GCD(15, 5) = 5
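The key idea above can be sketched directly as repeated subtraction (a minimal Python sketch; the function name gcd_subtract is ours, and it assumes both inputs are positive integers):

```python
def gcd_subtract(a, b):
    """Compute the GCD by repeatedly replacing the larger number
    with its difference from the smaller, until they are equal."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_subtract(20, 15))  # -> 5
```

This version is simple but slow when one operand is much larger than the other; the remainder-based version described next fixes that.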

Algorithm

The Euclidean algorithm for calculating the GCD of two numbers A and B can be given as follows:

If A = 0 then GCD(A, B) = B, since the greatest common divisor of 0 and B is B.

If B = 0 then GCD(A, B) = A, since the greatest common divisor of 0 and A is A.

Otherwise, write A in the form A = Q·B + R, where Q is the quotient and R is the remainder; then GCD(A, B) = GCD(B, R). Repeat this step until the remainder is 0.
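The steps above translate directly into a short recursive Python function (a sketch using the remainder operation in place of repeated subtraction):

```python
def gcd(a, b):
    """Euclidean algorithm: GCD(A, B) = A when B is 0,
    otherwise recurse on (B, A mod B)."""
    if b == 0:
        return a
    return gcd(b, a % b)

print(gcd(20, 15))  # -> 5
print(gcd(0, 7))    # -> 7
```

Note that the A = 0 case needs no separate branch: gcd(0, B) recurses once to gcd(B, 0) and returns B.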

Complexity

Let $T(a,b)$ be the number of steps taken in the Euclidean algorithm, which repeatedly evaluates $\gcd(a,b)=\gcd(b,a\bmod b)$ until $b=0$, assuming $a\geq b$.

Let $h\approx\log_{10}b$ be the number of digits in $b$, and assume for now that the time complexity of the $\bmod$ operation is $O(1)$.

Worst Case scenario

The worst case occurs when $a=F_{n+1}$ and $b=F_n$, where $F_n$ is the $n$-th Fibonacci number, since the algorithm computes $\gcd(F_{n+1},F_n)=\gcd(F_n,F_{n-1})$ at each step, so $T(F_{n+1},F_n)=\Theta(n)$ and $T(a,F_n)=O(n)$. Since $F_n=\Theta(\varphi^n)$, this implies that $T(a,b)=O(\log_\varphi b)$. Note that $h\approx\log_{10}b$ and $\log_b x={\log x\over\log b}$ implies $\log_b x=O(\log x)$ for any fixed base $b$, so the worst case for Euclid's algorithm is $O(\log_\varphi b)=O(h)=O(\log b)$.
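The worst-case behavior can be checked empirically by counting division steps on consecutive Fibonacci numbers (a sketch; the helper names count_steps and fib are ours):

```python
def count_steps(a, b):
    """Number of (a, b) -> (b, a mod b) steps until b reaches 0."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

def fib(n):
    """Return the n-th Fibonacci number (fib(1) = fib(2) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Consecutive Fibonacci numbers maximize the step count:
print(count_steps(fib(7), fib(6)))  # gcd(13, 8) takes 5 steps
```

The step count grows linearly in $n$, while the operands $F_n$ grow exponentially, which is exactly the logarithmic bound derived above.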

Average case scenario

If $a$ is fixed and $b$ is chosen uniformly from $\mathbb Z\cap[0,a)$, then the average number of steps $T(a)$ is ${12\ln 2\over\pi^2}\ln a+O(1)$, which is $O(\log a)$.
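This average can be checked empirically by averaging the step count over all $b$ below a fixed $a$ (a sketch; the constant $a=1000$ is chosen arbitrarily, and the measured average sits a small additive constant above the leading term):

```python
import math

def count_steps(a, b):
    """Number of (a, b) -> (b, a mod b) steps until b reaches 0."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

a = 1000
avg = sum(count_steps(a, b) for b in range(a)) / a
leading = (12 * math.log(2) / math.pi ** 2) * math.log(a)
print(f"measured {avg:.2f}, leading term {leading:.2f} (+ a constant)")
```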

Best case scenario

If $a=b$ or $b=0$ (or a similarly trivial case), the algorithm terminates in a single step. Thus $T(a,a)=O(1)$.

If we are working on a computer using 32-bit or 64-bit arithmetic, as is common, then the individual $\bmod$ operations are in fact constant-time, so the bounds above are correct. If, however, we are doing arbitrary-precision calculations, then to estimate the actual time complexity of the algorithm we need to account for the fact that $\bmod$ has time complexity $O(\log a\log b)$. In that case, essentially all of the work is done in the first step, and the rest of the computation is also $O(\log a\log b)$, so the total is $O(\log a\log b)$. Note, though, that this only applies when the numbers are large enough to require arbitrary-precision arithmetic.
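As a side note, Python's integers are arbitrary-precision by default, so the same algorithm runs unchanged on operands far beyond 64 bits, with each $\bmod$ then costing more than $O(1)$; the standard library also provides math.gcd (a quick illustration, with operands chosen arbitrarily):

```python
import math

def gcd(a, b):
    """Iterative Euclidean algorithm; works on ints of any size."""
    while b != 0:
        a, b = b, a % b
    return a

# Operands of 150-200 bits; each mod here is no longer constant-time.
x = 3 * 2 ** 200
y = 5 * 2 ** 150
print(gcd(x, y) == 2 ** 150)       # -> True
print(math.gcd(x, y) == 2 ** 150)  # -> True
```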

Implementations

Following are implementations of the Euclidean algorithm in 10 different languages: Python, C, C++, Java, C#, Erlang, Go, JavaScript, PHP and Scala.