Okay, sorry for the clickbait-y title.

I not-so-recently appeared in the 6th episode of the White Van podcast with Professor Parker Glynn-Adey, talking about math.

To be honest, I was the least qualified person on the panel, but Prof. Parker brought up the “most important algorithm”.

The Euclidean algorithm!

Algorithms: an introduction

First things first, what is an algorithm?

I think a really good definition is:

A set of steps/instructions to be carried out by an object with a given set of abilities (usually a computer).

There are tons of these in our day-to-day lives. Your daily routine is, in a sense, an algorithm that you carry out. Recipes? Algorithms to be carried out by chefs. Other things, like the way you multiply two numbers by hand, are also algorithms.

So what makes the Euclidean algorithm the most important?

Euclidean algorithm: an introduction

To really see why it’s the most important (or at least one of the most important), some background is needed first.

First of all, most, if not all, devices use the Euclidean algorithm; it shows up in most encryption systems, where it’s used to find the greatest common divisor of two numbers.

This is really useful for finding the multiplicative inverse of numbers in $\mathbb{Z}_p$, a.k.a. the integers modulo $p$ (where $p$ is prime). I won’t go into detail, but finding this number is very helpful in cryptographic systems (which allow you to buy all that clothing online)!
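To make that concrete, here’s a sketch in Python (my own illustration; the post doesn’t specify a language) of the extended Euclidean algorithm, which finds not just the gcd of $a$ and $p$ but also the coefficients that give you the multiplicative inverse of $a$ in $\mathbb{Z}_p$:

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        # same remainder sequence as the plain Euclidean algorithm,
        # but we also track the coefficients x and y along the way
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def mod_inverse(a, p):
    """Multiplicative inverse of a in Z_p (assumes p prime, a not a multiple of p)."""
    _, x, _ = extended_gcd(a % p, p)
    return x % p
```

For example, `mod_inverse(3, 7)` gives `5`, since $3 \times 5 = 15 \equiv 1 \pmod 7$.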

The process behind it

It’s actually one of the simplest algorithms to understand.

Here’s some pseudo-code:

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

Simple, isn’t it? The implementations used in computers are actually more efficient, replacing some of the divisions with bit-shifts and other operations that are less computationally expensive.
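One well-known variant along those lines is the binary GCD (Stein’s algorithm). Here’s a minimal Python sketch (again, my own illustration) that uses only shifts, comparisons, and subtraction:

```python
def binary_gcd(a, b):
    """gcd of two non-negative integers using shifts instead of division."""
    if a == 0:
        return b
    if b == 0:
        return a
    # factor out the powers of two common to both a and b
    shift = 0
    while ((a | b) & 1) == 0:
        a >>= 1
        b >>= 1
        shift += 1
    # strip remaining factors of two from a
    while (a & 1) == 0:
        a >>= 1
    while b != 0:
        # strip factors of two from b (they can't be in the gcd now)
        while (b & 1) == 0:
            b >>= 1
        # keep a <= b, then replace b with b - a (both odd, so b - a is even)
        if a > b:
            a, b = b, a
        b -= a
    # restore the common powers of two
    return a << shift
```

For example, `binary_gcd(48, 18)` gives `6`, the same answer as the division-based version.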

As simple as it is, don’t let that deceive you. Donald Knuth, the “father” of algorithm analysis, once called the Euclidean algorithm “the granddaddy of all algorithms”.

Computational Complexity

In one of my original blog posts about Fibonacci numbers, I erroneously mentioned that the Euclidean algorithm uses recursion (my bad! the usual formulation is iterative, though it can also be written with tail recursion). I also mentioned analyzing algorithms by their runtime in terms of $n$, the input size.

Well, the worst case of the Euclidean algorithm occurs when the inputs are consecutive Fibonacci numbers!

The reason is that when you take the bigger of two consecutive Fibonacci numbers and mod it by the other one, you get the Fibonacci number two terms before! For example, $13 \bmod 8 = 5$, so each step only moves you two terms down the sequence.

Another way of thinking about it: say you want $a$ and $b$ to be inputs that take at least $n$ steps of the algorithm. You can prove that the smallest such inputs are $\text{Fib}_{n+1}$ and $\text{Fib}_{n+2}$.

Guess the Fibonacci numbers strike again!
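You can watch this happen by counting iterations directly. Here’s a quick Python check (my own illustration) that consecutive Fibonacci inputs make the step count climb one-by-one with their position in the sequence:

```python
def gcd_steps(a, b):
    """Count the iterations the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# build the first few Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, ...
fib = [1, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

print(gcd_steps(13, 8))  # → 5: each step drops back only two terms
print([gcd_steps(fib[i + 1], fib[i]) for i in range(2, 8)])  # → [2, 3, 4, 5, 6, 7]
```

Each consecutive Fibonacci pair costs exactly one more step than the previous pair, which is what makes them the worst case.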

Note: The proof of this bound (due to Gabriel Lamé) is actually considered the beginning of computational complexity theory!