# Quicksort: the history and implementations

snj

The post Quicksort: the history and implementations appeared first on CodersCat.

Quicksort is a classic and elegant **divide-and-conquer** algorithm, one that every algorithms textbook explains and every programmer should learn.

But in fact, very few programmers can write a bug-free implementation without searching the Internet. If you don't believe it, close this page and try it now.

Don't be frustrated.

In this post, I will use quicksort as an example to explain my method for learning algorithms effectively, and to explore the differences between imperative and functional programming styles.

## A little history and the basics

Quicksort was invented by Tony Hoare in 1959 and published in 1961. When Tony Hoare first developed quicksort, he thought it was too simple to publish, and only wrote his classic "Quicksort" paper after he was able to analyze its expected runtime.

ALGOL 60 was the first language whose compilers supported recursion, which is what made it possible for Tony Hoare to publish the first recursive version of the code.

*Tony Hoare tells the story of how he came up with the idea for the quicksort algorithm while in Moscow in 1960.*

Generally, the basic steps for quicksort are:

- If there is only one element or none, we are done.
- Select a pivot element, e.g. the last element.
- Compare each element with the pivot: put smaller elements on its left-hand side and larger ones on its right-hand side, then move the pivot into its final position.
- Quicksort all elements on its left, then quicksort all elements on its right.
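The steps above can be sketched directly in Ruby. This is a minimal, non-in-place version for illustration only (`simple_qsort` is a hypothetical name, not from the original post):

```ruby
def simple_qsort(arr)
  return arr if arr.size <= 1                 # one or no element: done
  pivot = arr.last                            # select a pivot, e.g. the last element
  rest  = arr[0...-1]
  smaller = rest.select { |e| e <= pivot }    # elements for the left-hand side
  larger  = rest.select { |e| e > pivot }     # elements for the right-hand side
  simple_qsort(smaller) + [pivot] + simple_qsort(larger)
end
```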

Quicksort is a recursive algorithm. The complexity is O(N log N) on average, and O(N^2) in the worst case; with optimizations such as choosing the pivot randomly, the worst case can be avoided in practice.
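To see why pivot choice matters, here is a small counting experiment (a sketch; `qsort_count` is a hypothetical helper, not part of the original post). With the last element as a fixed pivot, an already-sorted input of size N costs N(N-1)/2 comparisons, the O(N^2) worst case, while a shuffled input stays near N log N:

```ruby
# Returns [sorted_array, number_of_comparisons].
def qsort_count(arr)
  return [arr, 0] if arr.size <= 1
  pivot = arr.last
  rest  = arr[0...-1]
  left, right = rest.partition { |e| e <= pivot }  # rest.size comparisons at this level
  ls, lc = qsort_count(left)
  rs, rc = qsort_count(right)
  [ls + [pivot] + rs, lc + rc + rest.size]
end

sorted_cmp   = qsort_count((1..200).to_a)[1]          # 200 * 199 / 2 = 19900, quadratic
shuffled_cmp = qsort_count((1..200).to_a.shuffle)[1]  # roughly N log N on average
```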

## How to learn classic algorithms

Studying classic algorithms like this is a great chance to sharpen your coding ability and learn about algorithm design. In my experience, you should not try to memorize the steps and details of an algorithm, but instead try to reinvent, reason about, and reimplement it yourself. Why? Because if you don't understand how it works, you cannot remember the details, and you will forget them a few days later.

My recommended steps for learning classic algorithms are:

The most important step is to **implement the algorithm by yourself** (don't copy code). First, implement the naive version without any optimization; then make it clear and clean; finally, make it faster. Remember that the textbook is a reference: we should convert the material in the book into something we understand completely by ourselves.

## Imperative and Functional styles

Most textbooks describe quicksort in an imperative programming style. For example, an implementation in Ruby looks like this; an implementation in any procedural language would be similar:

```
def partition(arr, low, high)
  pos = rand(low..high)                  # randomly choose the pivot
  arr[pos], arr[high] = arr[high], arr[pos]
  sep = low                              # index from the low position
  (low..high - 1).each do |j|
    if arr[j] <= arr[high]
      arr[sep], arr[j] = arr[j], arr[sep]
      sep += 1
    end
  end
  arr[sep], arr[high] = arr[high], arr[sep]
  sep
end

def quick_sort(arr, low, high)
  if low < high
    pi = partition(arr, low, high)
    quick_sort(arr, low, pi - 1)
    quick_sort(arr, pi + 1, high)
  end
end
```

As quicksort is a recursive algorithm, it's more natural to implement it in a functional style. I was shocked when I first saw a one-line quicksort in Ruby like this:

```
def qsort(a)
  (x = a.pop) ? qsort(a.select { |i| i <= x }) + [x] + qsort(a.select { |i| i > x }) : []
end
```

`a.select` filters the elements smaller or larger than the pivot, and we then build a new array around `[pivot]`. In a language with pattern matching, like Haskell, it is even more elegant:
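One caveat worth noting: `a.pop` mutates its argument, so pass a copy if you still need the original order. A quick usage sketch (restating the one-liner so it runs standalone):

```ruby
def qsort(a)
  (x = a.pop) ? qsort(a.select { |i| i <= x }) + [x] + qsort(a.select { |i| i > x }) : []
end

data   = [4, 1, 3, 2]
sorted = qsort(data.dup)  # dup, because qsort pops elements from its argument
```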

```
qsort n = case n of
  []     -> []
  (x:xs) -> qsort [a | a <- xs, a <= x] ++ [x] ++ qsort [a | a <- xs, a > x]
```

One might say imperative programming is faster because we do not constantly allocate memory for new arrays. In fact, most modern languages with good functional support have many optimizations in their compilers (or interpreters). Let's benchmark the two Ruby versions with 100,000 random numbers (so the worst case will not happen):
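The exact numbers from the original run are not preserved in this post; a harness along these lines, using Ruby's standard `Benchmark` module and restating both versions compactly so the snippet runs standalone, reproduces the comparison:

```ruby
require 'benchmark'

def partition(arr, low, high)
  pos = rand(low..high)                 # randomly choose the pivot
  arr[pos], arr[high] = arr[high], arr[pos]
  sep = low
  (low..high - 1).each do |j|
    if arr[j] <= arr[high]
      arr[sep], arr[j] = arr[j], arr[sep]
      sep += 1
    end
  end
  arr[sep], arr[high] = arr[high], arr[sep]
  sep
end

def quick_sort(arr, low, high)
  return unless low < high
  pi = partition(arr, low, high)
  quick_sort(arr, low, pi - 1)
  quick_sort(arr, pi + 1, high)
end

def qsort(a)
  (x = a.pop) ? qsort(a.select { |i| i <= x }) + [x] + qsort(a.select { |i| i > x }) : []
end

data = Array.new(100_000) { rand(1_000_000) }

Benchmark.bm(12) do |bm|
  bm.report('imperative:') { quick_sort(data.dup, 0, data.size - 1) }
  bm.report('functional:') { qsort(data.dup) }
  bm.report('builtin:')    { data.dup.sort }
end
```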

As you can see, the functional version can even perform better than the imperative version. I have seen many algorithms implemented more simply and elegantly in a functional style, and data structures can also be described in purely functional terms.

## Other optimizations

Did you see the benchmark results above? Look how fast the built-in sort implementation in Ruby is!

We should dig into how it is implemented. In Ruby's repository, we find that the code (Ruby's qsort) is very long: almost 150 lines of C, quite similar to GNU's version. Software engineering is a trade-off in most cases; piling optimizations onto code reduces its readability.

TypeOcaml demonstrated the immutability of functional programming with quicksort as an example.

There are also some variants: Kushagra-Ortiz-Qiao-Munro describe three-pivot quicksort, and in *Algorithms*, Robert Sedgewick and Kevin Wayne explain quicksort very well, with some fantastic visualizations.

Also check out dual-pivot quicksort by Vladimir Yaroslavskiy. It's quite an excellent improvement.
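The dual-pivot idea can be sketched in the same functional style as above (this is only an illustration, not Yaroslavskiy's actual in-place partitioning scheme; `dual_pivot_qsort` is a hypothetical name): two pivots p1 <= p2 split the remaining elements into three parts, each sorted recursively.

```ruby
def dual_pivot_qsort(arr)
  return arr if arr.size <= 1
  a  = arr.dup
  p2 = a.pop                       # take two pivots from the ends
  p1 = a.shift
  p1, p2 = p2, p1 if p1 > p2       # ensure p1 <= p2
  low  = a.select { |e| e < p1 }               # below both pivots
  mid  = a.select { |e| e >= p1 && e <= p2 }   # between the pivots
  high = a.select { |e| e > p2 }               # above both pivots
  dual_pivot_qsort(low) + [p1] + dual_pivot_qsort(mid) + [p2] + dual_pivot_qsort(high)
end
```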