This section presents several algorithms, in increasing order of efficiency, for finding the greatest common divisor of two integers.

The greatest common divisor (GCD) of two integers is the largest number that evenly divides both integers. GreatestCommonDivisor.java presented a brute-force algorithm for finding the greatest common divisor of two integers **m** and **n**. *Brute force* refers to an algorithmic approach that solves a problem in the most straightforward, direct, or obvious way. As a result, such an algorithm can end up doing far more work to solve a given problem than a cleverer or more sophisticated algorithm would. On the other hand, a brute-force algorithm is often easier to implement than a more sophisticated one, and thanks to its low overhead it can even run faster on small inputs.

The brute-force algorithm checks whether **k** (for **k** = **2**, **3**, **4**, and so on) is a common divisor for **m** and **n**, until **k** is greater than **m** or **n**. The algorithm can be described as follows:

```java
public static int gcd(int m, int n) {
  int gcd = 1;
  for (int k = 2; k <= m && k <= n; k++) {
    if (m % k == 0 && n % k == 0)
      gcd = k;
  }
  return gcd;
}
```

Assuming *m >= n*, the time complexity of this algorithm is O(n), since the loop runs at most *n* times.
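For quick experimentation, the brute-force method can be wrapped in a small program. The class name `GcdDemo` here is illustrative, not from the text:

```java
public class GcdDemo {
  // Brute-force GCD: try every k from 2 up to min(m, n),
  // remembering the last common divisor found.
  public static int gcd(int m, int n) {
    int gcd = 1;
    for (int k = 2; k <= m && k <= n; k++) {
      if (m % k == 0 && n % k == 0)
        gcd = k;
    }
    return gcd;
  }

  public static void main(String[] args) {
    System.out.println(gcd(2525, 125)); // prints 25
    System.out.println(gcd(3, 3));      // prints 3
  }
}
```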

Is there a better algorithm for finding the GCD? Rather than searching for a divisor upward from **1**, it is more efficient to search downward from **n**. Once a common divisor is found, it is the GCD. Therefore, you can improve the algorithm using the following loop:

```java
for (int k = n; k >= 1; k--) {
  if (m % k == 0 && n % k == 0) {
    gcd = k;
    break;
  }
}
```

This algorithm is better than the preceding one, but its worst-case time complexity is still O(n).
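Wrapped in a complete method, the downward search might look like the following sketch (the class name `GcdDownward` is illustrative; the loop works for any positive **m** and **n**, though the text assumes *m >= n*):

```java
public class GcdDownward {
  // Search downward from n; the first common divisor found is the GCD.
  public static int gcd(int m, int n) {
    int gcd = 1;
    for (int k = n; k >= 1; k--) {
      if (m % k == 0 && n % k == 0) {
        gcd = k;
        break;  // the first hit is the largest common divisor
      }
    }
    return gcd;
  }

  public static void main(String[] args) {
    System.out.println(gcd(2525, 125)); // prints 25
  }
}
```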

A divisor of a number **n**, other than **n** itself, cannot be greater than **n / 2**, so you can further improve the algorithm using the following loop:

```java
for (int k = m / 2; k >= 1; k--) {
  if (m % k == 0 && n % k == 0) {
    gcd = k;
    break;
  }
}
```

However, this algorithm is incorrect, because **n** itself can be a divisor of **m** (for example, when **m** and **n** are equal, as in the second sample run), and a loop starting at **m / 2** would miss that case. This case must be checked separately. The correct algorithm is shown in GCD.java; two sample runs appear below.
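The full listing is not reproduced here; a minimal sketch of the corrected algorithm might look like the following (the class name `Gcd` is illustrative; checking `m % n == 0` first handles the case where **n** divides **m**, and starting the loop at `n / 2` suffices because any remaining common divisor is a proper divisor of **n**):

```java
public class Gcd {
  public static int gcd(int m, int n) {
    int gcd = 1;
    if (m % n == 0)
      return n;  // n itself divides m, so the GCD is n
    for (int k = n / 2; k >= 1; k--) {
      if (m % k == 0 && n % k == 0) {
        gcd = k;
        break;
      }
    }
    return gcd;
  }
}
```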

```
Enter first integer: 2525
Enter second integer: 125
The greatest common divisor for 2525 and 125 is 25
```

```
Enter first integer: 3
Enter second integer: 3
The greatest common divisor for 3 and 3 is 3
```

Assuming *m >= n*, the **for** loop is executed at most n/2 times, cutting the running time of the previous algorithm in half. The time complexity of this algorithm is still O(n), but in practice it is much faster than the algorithm in GreatestCommonDivisor.java.

The Big O notation provides a good theoretical estimate of algorithm efficiency. However, two algorithms of the same time complexity are not necessarily equally efficient. As shown in the preceding example, the algorithms in GreatestCommonDivisor.java and GCD.java have the same time complexity, but in practice the one in GCD.java is obviously better.

A more efficient algorithm for finding the GCD was discovered by Euclid around 300 B.C. It is one of the oldest known algorithms and can be defined recursively as follows:

Let **gcd(m, n)** denote the GCD for integers **m** and **n**:

- If **m % n** is **0**, **gcd(m, n)** is **n**.
- Otherwise, **gcd(m, n)** is **gcd(n, m % n)**.

It is not difficult to prove the correctness of this algorithm. Suppose **m % n = r**. Then **m = qn + r**, where **q** is the quotient of **m / n**. Any number that divides both **m** and **n** must also divide **r**, since **r = m − qn**; conversely, any number that divides both **n** and **r** also divides **m**. Therefore, **gcd(m, n)** is the same as **gcd(n, r)**, where **r = m % n**. The algorithm can be implemented as in the code below.
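The listing itself is not reproduced here; a direct recursive translation of the two rules above would be (the class name `GcdEuclid` is illustrative):

```java
public class GcdEuclid {
  // Euclid's algorithm:
  //   gcd(m, n) = n            if m % n == 0
  //   gcd(m, n) = gcd(n, m % n) otherwise
  public static int gcd(int m, int n) {
    if (m % n == 0)
      return n;
    else
      return gcd(n, m % n);
  }

  public static void main(String[] args) {
    System.out.println(gcd(2525, 125)); // prints 25
  }
}
```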

In the best case when **m % n** is **0**, the algorithm takes just one step to find the GCD. It is difficult to analyze the average case. However, we can prove that the worst-case time complexity is O(log n).

Assuming *m >= n*, we can show that **m % n < m / 2**, as follows:

- If **n <= m / 2**, then **m % n < m / 2**, since the remainder of **m** divided by **n** is always less than **n**.
- If **n > m / 2**, then **m % n = m − n < m / 2**.

Therefore, **m % n < m / 2**.

Euclid’s algorithm recursively invokes the **gcd** method. It first calls **gcd(m, n)**, then calls **gcd(n, m % n)**, and **gcd(m % n, n % (m % n))**, and so on, as follows:

```
gcd(m, n)
  = gcd(n, m % n)
  = gcd(m % n, n % (m % n))
  = ...
```

Since **m % n < m / 2** and **n % (m % n) < n / 2**, the argument passed to the **gcd** method is reduced by half after every two iterations. After invoking **gcd** two times, the second parameter is less than n/2. After invoking **gcd** four times, the second parameter is less than n/2^2. After invoking **gcd** six times, the second parameter is less than n/2^3. Let *k* be the number of times the **gcd** method is invoked. After invoking **gcd** *k* times, the second parameter is less than n/2^(k/2), and since it is a positive integer, it is greater than or equal to 1. That is,

1 <= n / 2^(k/2), which implies 2^(k/2) <= n.

Therefore, k <= 2 log n. So the time complexity of the **gcd** method is O(log n).

The worst case occurs when the two numbers result in the most divisions. It turns out that two successive Fibonacci numbers will result in the most divisions. Recall that the Fibonacci series begins with 0 and 1, and each subsequent number is the sum of the preceding two numbers in the series, such as:

`0 1 1 2 3 5 8 13 21 34 55 89 . . .`

The series can be recursively defined as

```
fib(0) = 0;
fib(1) = 1;
fib(index) = fib(index - 2) + fib(index - 1);  index >= 2
```

For two successive Fibonacci numbers **fib(index)** and **fib(index - 1)**,

```
gcd(fib(index), fib(index - 1))
  = gcd(fib(index - 1), fib(index - 2))
  = gcd(fib(index - 2), fib(index - 3))
  = gcd(fib(index - 3), fib(index - 4))
  = ...
  = gcd(fib(2), fib(1))
  = 1
```

For example,

```
gcd(21, 13)
  = gcd(13, 8)
  = gcd(8, 5)
  = gcd(5, 3)
  = gcd(3, 2)
  = gcd(2, 1)
  = 1
```

Therefore, the number of times the **gcd** method is invoked is the same as the index. We can prove that index <= 1.44 log n, where n = fib(index - 1). This is a tighter bound than index <= 2 log n.
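To see the worst case concretely, one can count how many times the recursive method is invoked for a successive Fibonacci pair. The counter field and class name below are illustrative helpers, not from the text:

```java
public class GcdCallCount {
  static int calls;  // number of times gcd has been invoked

  static int gcd(int m, int n) {
    calls++;
    return m % n == 0 ? n : gcd(n, m % n);
  }

  public static void main(String[] args) {
    calls = 0;
    // Successive Fibonacci numbers: fib(8) = 21, fib(7) = 13.
    // The chain gcd(21,13) -> gcd(13,8) -> ... -> gcd(2,1) makes 6 calls.
    gcd(21, 13);
    System.out.println(calls);  // prints 6
  }
}
```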

The table below summarizes the time complexities of the three algorithms for finding the GCD:

| Algorithm | Time complexity |
| --- | --- |
| Brute force, checking all possible divisors (GreatestCommonDivisor.java) | O(n) |
| Checking half of the possible divisors (GCD.java) | O(n) |
| Euclid's algorithm | O(log n) |
