Anyone who has been through a CS program or spent time learning sorting algorithms will be well aware that today's most widely used algorithm is quicksort, since it boasts an average-case run time of O(n log n). Less efficient algorithms, such as insertion sort with its average O(n * n) time, might seem less favorable. However, there are many edge cases in which each can surprisingly shine brighter than the others.
The go-to sorting algorithm, quicksort, owes its efficiency to two factors: the choice of pivot (the partition element) and the entropy of the source array. If the array is partially or substantially sorted, and/or the pivot is repeatedly chosen so that it falls near the smallest or largest element of its partition, quicksort can easily degrade to O(n * n).
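To make the degradation concrete, here is a minimal sketch of a naive quicksort that always picks the first element of each partition as the pivot (the function name and details are illustrative, not from the original article). On an already-sorted array, every partition ends up maximally lopsided, which is exactly the O(n * n) worst case described above.

```python
def quicksort(arr, lo=0, hi=None):
    # Naive quicksort: always uses the first element as the pivot.
    # On already-sorted input, every partition puts all remaining
    # elements on one side, so the run time degrades to O(n * n).
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[lo]
    i = lo + 1  # boundary of the "less than pivot" region
    for j in range(lo + 1, hi + 1):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[lo], arr[i - 1] = arr[i - 1], arr[lo]  # move pivot into place
    quicksort(arr, lo, i - 2)
    quicksort(arr, i, hi)

data = [5, 2, 9, 1, 7]
quicksort(data)
print(data)  # [1, 2, 5, 7, 9]
```

Production implementations sidestep the worst case by picking the pivot more carefully, for example median-of-three or a random element.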
Although insertion sort seems slow, it is quite efficient when used on a small array of data. Its quadratic O(n * n) run time improves to a respectable O(n * k), where k is the number of steps each element must travel back up the array, swapping with the previous element along the way. Also, if the array is partially or substantially sorted, insertion sort will shine even brighter than quicksort, which performs quite poorly in that case. The reason is that k stays very small, because elements rarely need to travel far back up the array.
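A short sketch of the algorithm makes the role of k visible: the inner while loop below is exactly the "walk back up the array" step, and on nearly-sorted input it barely runs.

```python
def insertion_sort(arr):
    # Walk each element back toward the front until it sits in order.
    # The inner loop runs k times per element, giving O(n * k);
    # on nearly-sorted input k is tiny, so the sort is close to linear.
    for i in range(1, len(arr)):
        current = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]  # shift the larger element one slot right
            j -= 1
        arr[j + 1] = current
    return arr

print(insertion_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```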
Selection sort is a pretty simple algorithm with a neat O(1) constant space complexity. Although its run time is quadratic, it has a magical property: after N passes over an array, the first N elements are in their final sorted positions. What does this mean? It means that if your sorted data is to be paginated and ordered, e.g. search results, you only need to run that many passes of selection sort. Once you need the next set of results, you can resume where you left off, and so on.
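The pagination idea above can be sketched as a resumable selection sort. The helper name `selection_sort_passes` and the page size are hypothetical, chosen just to illustrate the "first N elements are final after N passes" property.

```python
def selection_sort_passes(arr, start, passes):
    # Run `passes` passes of selection sort, beginning at index `start`.
    # Afterwards arr[:start + passes] holds the smallest elements in
    # final sorted order, so results can be produced one "page" at a time.
    for i in range(start, min(start + passes, len(arr))):
        smallest = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[smallest]:
                smallest = j
        arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr[start:start + passes]  # the newly finalized page

data = [9, 4, 7, 1, 8, 2, 6]
page1 = selection_sort_passes(data, 0, 3)  # first page: [1, 2, 4]
page2 = selection_sort_passes(data, 3, 3)  # next page: [6, 7, 8]
```

Each call does only the work needed for the requested page, while the rest of the array stays unsorted until a later page asks for it.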