Insertion Sort is a sorting algorithm that works much like the way many people sort the cards in their hand during a game. It takes one element at a time and inserts it into its correct position within the already sorted portion of the list, shifting the larger elements one place to the right.

## How it works

The functioning of the algorithm can be summarized in the following steps:

- It starts from the second element (index 1 in a zero-based array), considering the first element as already sorted.
- It takes the current element (the "key") and compares it with the elements that precede it.
- While a preceding element is larger than the key, that element is shifted one position to the right.
- This continues until it finds an element less than or equal to the key, or until it reaches the beginning of the array, which means the key is the smallest element seen so far.
- The key is placed into the gap left by the shifts, and the algorithm moves on to the next element, repeating the process until the whole array is sorted.

Here is a simple implementation in C#:

```csharp
public void InsertionSort(int[] arr)
{
    var len = arr.Length;
    for (var i = 1; i < len; i++)
    {
        // The element to insert into the sorted prefix arr[0..i-1].
        var key = arr[i];
        var j = i - 1;

        // Shift every element larger than key one position to the right.
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j--;
        }

        // Drop key into the gap.
        arr[j + 1] = key;
    }
}
```
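For example, here is the method above called on a small sample array (wrapped in a minimal program so the snippet compiles and runs on its own; the class and array values are just for illustration):

```csharp
using System;

class Demo
{
    // Same method as in the article, reproduced so this snippet is self-contained.
    static void InsertionSort(int[] arr)
    {
        for (var i = 1; i < arr.Length; i++)
        {
            var key = arr[i];
            var j = i - 1;
            while (j >= 0 && arr[j] > key)
            {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;
        }
    }

    static void Main()
    {
        var numbers = new[] { 5, 2, 4, 6, 1, 3 };
        InsertionSort(numbers);
        Console.WriteLine(string.Join(", ", numbers)); // prints: 1, 2, 3, 4, 5, 6
    }
}
```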

## Time complexity

In the worst case and average case, the time complexity is O(n²), where n is the number of elements in the array. This means that the execution time increases quadratically with the size of the input. However, in the best case (when the array is already sorted), it has a time complexity of O(n).
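To make the difference between the best and worst case concrete, here is a small instrumented sketch (the shift counter is added purely for illustration and is not part of the original method):

```csharp
using System;

class ShiftCount
{
    // Insertion Sort that also counts how many element shifts it performs.
    static long SortAndCountShifts(int[] arr)
    {
        long shifts = 0;
        for (var i = 1; i < arr.Length; i++)
        {
            var key = arr[i];
            var j = i - 1;
            while (j >= 0 && arr[j] > key)
            {
                arr[j + 1] = arr[j];
                j--;
                shifts++;
            }
            arr[j + 1] = key;
        }
        return shifts;
    }

    static void Main()
    {
        var sorted   = new[] { 1, 2, 3, 4, 5 }; // best case: already sorted
        var reversed = new[] { 5, 4, 3, 2, 1 }; // worst case: reverse sorted

        Console.WriteLine(SortAndCountShifts(sorted));   // 0 shifts: O(n)
        Console.WriteLine(SortAndCountShifts(reversed)); // 10 shifts: n(n-1)/2, i.e. O(n²)
    }
}
```

With five reversed elements the inner loop shifts 1 + 2 + 3 + 4 = 10 times, which is exactly n(n−1)/2 for n = 5.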

## Space complexity

Because it is an in-place sorting algorithm, its space complexity is O(1): memory usage does not grow with the input size. In other words, apart from a few loop variables, it needs no additional space to sort the data.

## Conclusion

Insertion Sort is a fine illustration of how varied sorting algorithms can be. Although it does not excel on large data sets because of its quadratic time complexity, its strengths shine in a number of particular scenarios.

First, its simplicity in terms of understanding and implementation makes it an excellent teaching tool for those new to the world of algorithms.

Second, when dealing with small data sets or near-sorted data sets, Insertion Sort can be surprisingly efficient and even outperform more complex algorithms.

Third, its stability (the property of preserving the relative order of equal elements) and its in-place nature are advantageous in certain situations.
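Stability follows from the strict `>` comparison in the inner loop: an element is never moved past another element with an equal key. A small sketch of this (the pair values and labels are made up for the demonstration):

```csharp
using System;

class StabilityDemo
{
    // Insertion Sort over (key, label) pairs. The strict '>' comparison
    // never shifts an element past an equal key, so equal keys keep
    // their original relative order: the sort is stable.
    static void InsertionSort((int Key, string Label)[] arr)
    {
        for (var i = 1; i < arr.Length; i++)
        {
            var item = arr[i];
            var j = i - 1;
            while (j >= 0 && arr[j].Key > item.Key)
            {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = item;
        }
    }

    static void Main()
    {
        var cards = new[] { (2, "first 2"), (1, "only 1"), (2, "second 2") };
        InsertionSort(cards);
        foreach (var c in cards) Console.WriteLine($"{c.Key} {c.Label}");
        // 1 only 1
        // 2 first 2
        // 2 second 2
    }
}
```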

Finally, Insertion Sort performs exceptionally well when data arrives in real time: it can maintain an ordered list as new items come in, making it an excellent choice for scenarios where a continuously ordered view of the data is useful.
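This "keep it sorted as data arrives" idea can be sketched with a `List<int>`. `BinarySearch` and `Insert` are the standard .NET APIs; the `InsertSorted` helper name is my own for this example:

```csharp
using System;
using System.Collections.Generic;

class OnlineDemo
{
    // Inserts a value into an already-sorted list, keeping it sorted.
    // List<T>.BinarySearch returns the index of the value if found;
    // otherwise it returns the bitwise complement (~) of the index
    // where the value belongs.
    static void InsertSorted(List<int> sorted, int value)
    {
        var index = sorted.BinarySearch(value);
        if (index < 0) index = ~index;
        sorted.Insert(index, value);
    }

    static void Main()
    {
        var sorted = new List<int>();
        foreach (var value in new[] { 7, 3, 9, 1 })
        {
            InsertSorted(sorted, value);
            Console.WriteLine(string.Join(", ", sorted));
        }
        // 7
        // 3, 7
        // 3, 7, 9
        // 1, 3, 7, 9
    }
}
```

Each arriving item is placed directly into its sorted position, so the list is ordered at every moment, which is exactly the insertion step of the algorithm applied online.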

## References

- Francesco Oliveri, *Algoritmi e programmazione in C*, Aracne editrice (Italian-language book)

## Top comments

Interesting.

You mentioned that Insertion Sort can outperform some other algorithms with smaller data sets. Would you recommend actually using an Insertion Sort with smaller data sets? If so, is there a rule of thumb as to when a data set is too large to be suitable?

If you know a priori that you have to work on datasets that are small (e.g., an array of 10 or 20 elements) and partially sorted, then using Insertion Sort is a good solution.

There is no rule of thumb, but one must analyze on a case-by-case basis.

My advice, however, is to always use the sort function built into the framework you are using, since it is optimized for different scenarios. For example, in .NET the `Array.Sort` function uses an algorithm called IntroSort, or Introspective Sort, which is a combination of QuickSort, HeapSort, and Insertion Sort and offers efficient performance for a wide range of scenarios.

Thanks. Your advice echoes what I was once told when I studied algorithms: I should never try to re-invent the wheel when it comes to algorithms - there are people who are far smarter than me who tweak the library/framework implementations all day long; leave it to them to get the best performance.

Same thing for cryptographic algorithms. Use existing ones developed by people who specialise in them and have been battle-tested for vulnerabilities.

I do not entirely agree with what you have been told. You should not rewrite algorithms that already exist, such as AES, for the simple reason that existing libraries are tested by millions of users, including cybersecurity experts. However, if you have a good idea for a new algorithm that can improve on something that already exists, then I think you should strive to implement it 😉 For example, if Jason A. Donenfeld had not invented WireGuard, we would never have had this fantastic VPN protocol, and yet OpenVPN was and is a superb protocol.

Good point - we shouldn't be afraid to research/try new algorithms if we know what we're doing.

As algorithms aren't really my field, though, I think I'll leave it to the experts. That doesn't mean I won't hack away at other things though 😁