Ben

Originally published at benbarten.com

Go Big O

Software development has become widely accessible in today's world. Besides the traditional academic path,
there are endless ways to study the concepts of software development.
Getting acquainted with modern software development frameworks is as easy as watching an online course on platforms like Udemy or Coursera. As long as you have a computer with an internet connection at hand, you can teach yourself software development with basically no money. That is how I did it.

What is widely missed in these courses, and often underrated in the workforce, are the basic concepts of computer science.
This post aims to get you started with these concepts, specifically the Big O notation, which every developer should not only be familiar with but master if they want to do a good job. Furthermore, solid knowledge of Big O is fundamental to surviving a technical job interview. Since I am currently learning Go as a new programming language, all examples will be provided in Go. You can find all examples and algorithms in this GitHub repository. The algorithms are implemented by me, so there will probably be more efficient implementations out there. Any feedback is welcome!

What is Big O?

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (Source: Wikipedia)

In other words, with Big O we are looking for a generalised way to describe how an algorithm performs as its input grows. We are not looking for the exact number of executed lines of code but rather for how that number grows with the size of the input. This applies to time as well as to space/memory.
We will have a closer look at both time and space complexity within this article.

An Example

To get a better sense of how Big O is calculated, let’s have a look at the classic (not so efficient) selection sort algorithm.
Here is a simple implementation in Go:

func SelectionSort(list []int32) []int32 {
   for i := 0; i < len(list); i++ {
      // Compare position i against every element after it and pull
      // smaller values to the front.
      for j := i + 1; j < len(list); j++ {
         if list[j] < list[i] {
            swap := list[i]
            list[i] = list[j]
            list[j] = swap
         }
      }
   }
   return list
}

We iterate over every element in the list and compare its value against every element after it, swapping whenever we find a smaller one.
Now, let's call the size of the list n. The outer loop runs n times. The inner loop runs over the elements after the current one: n-1 times for the first element, n-2 times for the second, and so on down to 0. Summing this up gives (n-1) + (n-2) + ... + 0 = n*(n-1)/2, which is equal to n^2/2 - n/2. In Big O notation this reduces to O(n^2). To understand why, let's have a closer look.
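
If you want to verify that sum empirically, here is a small helper sketch of my own (not part of the repository) that counts how often the inner comparison runs:

// CountComparisons mirrors the two loops of SelectionSort but only
// counts the comparisons instead of sorting anything.
func CountComparisons(n int) int {
   count := 0
   for i := 0; i < n; i++ {
      for j := i + 1; j < n; j++ {
         count++
      }
   }
   return count // n*(n-1)/2, e.g. 45 for n = 10
}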

Best Case, Worst Case, and Expected Case

To describe the runtime of an algorithm, there are actually three different cases: the best, the worst, and the expected case. To illustrate this, let's have a look at this implementation of the Quick Sort algorithm.

func QuickSort(arr []int) []int {
    if len(arr) < 2 {
        return arr
    }

    // Partition around the last element and get the pivot's final position.
    pivotIdx := partition(arr)

    // Recursively sort everything left and right of the pivot.
    QuickSort(arr[:pivotIdx])
    QuickSort(arr[pivotIdx+1:])
    return arr
}

// partition rearranges arr in place around its last element (the pivot):
// smaller values end up on the left of it, bigger values on the right.
// It returns the final index of the pivot.
func partition(arr []int) int {
    pivot := arr[len(arr)-1]
    i := 0
    for j := 0; j < len(arr)-1; j++ {
        if arr[j] < pivot {
            swap(arr, i, j)
            i++
        }
    }
    swap(arr, i, len(arr)-1)
    return i
}

func swap(arr []int, leftIdx int, rightIdx int) {
    arr[leftIdx], arr[rightIdx] = arr[rightIdx], arr[leftIdx]
}

If you are not familiar with the implementation of Quick Sort, I recommend this short video.
Quick Sort is a very efficient sorting algorithm. It recursively partitions an array around a chosen pivot into two subarrays: the left one holds values smaller than the pivot, the right one larger values. This implementation picks the last element of the (sub)array as the pivot.

Now, let's have a look at what might happen
during execution.

  • Best case: The pivot repeatedly splits the array into two halves of roughly equal size. The recursion is then only log n levels deep, with O(n) work per level, which leaves us with a complexity of O(n log n).
  • Worst case: The chosen pivot is repeatedly the highest (or lowest) element in the array. Partitioning then only shrinks the array by one element per step and effectively ends in an O(n^2) runtime. With the last element as the pivot, an already sorted array triggers exactly this (see the sketch below).
  • Expected case: The above examples are rare scenarios which will, depending on the input, happen from time to time. But on average, Quick Sort has an O(n log n) runtime.
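
To make the worst case concrete, here is a small usage sketch of my own: feeding an already sorted slice into the QuickSort above keeps picking the maximum as the pivot.

// Already sorted input: each partition step peels off only the pivot,
// so the steps perform (n-1) + (n-2) + ... + 1 comparisons, i.e. O(n^2).
worstCase := []int{1, 2, 3, 4, 5, 6, 7, 8}
QuickSort(worstCase) // still correct, just slow for large inputs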

If you are not familiar with the Big O notation and feel a bit confused by now, don't worry! We will now take a step back
and have a closer look at the most common time complexities.

Typical Big Os and their Complexity

Below, we can see a Big O complexity chart. It shows the different Big O types and their growth rate for an increasing
number of elements.

[Big O complexity chart: growth of O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n) and O(n!) for an increasing number of elements]

(Source: https://www.bigocheatsheet.com/)

In our leading example, we said that with Big O we can reduce a complexity of n^2/2 - n/2 to O(n^2). We focus only on the dominant term of a function because it dominates the growth rate in the long run.
In addition, we can drop the constant factor in front of the dominant term. This might be confusing in the beginning, but you can easily verify the behaviour by plugging in very large values: for n = 1,000,000, the n^2/2 part is half a trillion, while the n/2 part is a mere 500,000. The only thing that really interests us with Big O is how the algorithm scales in a generalised way.
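
To see this numerically, here is a small self-contained sketch of my own that compares the exact count from our selection sort example with its dominant term:

package main

import "fmt"

func main() {
    // The ratio between the exact count n^2/2 - n/2 and the dominant
    // term n^2 settles at the constant 1/2 - exactly the kind of
    // factor Big O ignores.
    for _, n := range []float64{10, 1_000, 1_000_000} {
        exact := n*n/2 - n/2
        fmt.Printf("n=%.0f exact=%.0f n^2=%.0f ratio=%.4f\n",
            n, exact, n*n, exact/(n*n))
    }
}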

To demonstrate this, I will show some common examples of these Big O types and briefly explain why they perform as they do.

O(1)

Given a list, print the first element of that list. No matter how big the list is, this will always be a single operation.
If there is any way to achieve it, this is the complexity we should aim for.

func PrintFirstElementOfList(list []int) {
   fmt.Println(list[0])
}

O(log n)

The classic binary search algorithm is a perfect example. Given a sorted list, we search for a number by repeatedly comparing it to the middle element and continuing in whichever half can still contain it. Halving the input with each iteration leaves us with a time complexity of O(log n).

func BinarySearch(a int, sortedList []int) int {
    left := 0
    right := len(sortedList) - 1

    for left <= right {
        // Middle index; written this way to avoid integer overflow
        // for very large slices.
        i := left + (right-left)/2

        if a == sortedList[i] {
            return i
        } else if a > sortedList[i] {
            left = i + 1
        } else {
            right = i - 1
        }
    }
    return -1
}
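
To get a feeling for how slowly log n grows, here is a small usage sketch of my own: a sorted slice with a million elements is searched in at most about 20 steps, since 2^20 is roughly 1,000,000.

// A million sorted elements...
sorted := make([]int, 1_000_000)
for i := range sorted {
    sorted[i] = i
}
// ...are found in at most ~20 iterations of the loop above.
fmt.Println(BinarySearch(424242, sorted)) // prints 424242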

O(n)

Traversing a list or array. One operation for each element. While it is still worse than O(log n), O(n) is fairly acceptable and easy to estimate.

func PrintEveryItem(list []int) {
   for i := 0; i < len(list); i++ {
      fmt.Println(list[i])
   }
}

O(n log n)

This one is a bit trickier and a famous case in technical interviews. A classic example is the Merge Sort algorithm. First, the list is divided into halves until only single-element lists remain (the log n part), and then all elements are merged back together level by level (the n part). It is more complex than O(n) but still not nearly as bad as O(n^2).

func MergeSort(arr []int) []int {
   if len(arr) <= 1 { // also guards against empty input
      return arr
   }

   left, right := divide(arr)

   return merge(MergeSort(left), MergeSort(right))
}

func divide(arr []int) ([]int, []int) {
   m := len(arr) / 2
   return arr[:m], arr[m:]
}

func merge(left []int, right []int) []int {
   merged := make([]int, len(left)+len(right))

   // Repeatedly take the smaller head of the two lists.
   i := 0
   for len(left) > 0 && len(right) > 0 {
      if left[0] < right[0] {
         merged[i] = left[0]
         left = left[1:]
      } else {
         merged[i] = right[0]
         right = right[1:]
      }
      i++
   }

   // At most one of the two lists still has elements; copy them over.
   for j := 0; j < len(left); j++ {
      merged[i] = left[j]
      i++
   }
   for j := 0; j < len(right); j++ {
      merged[i] = right[j]
      i++
   }

   return merged
}
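
To get a feeling for the log n part, here is a small sketch of my own (not from the repository): the number of times a list can be halved grows only by one when the input size doubles.

// levels counts how often a list of length n can be halved until only
// single elements remain - the depth of MergeSort's recursion tree and
// the "log n" factor in O(n log n).
func levels(n int) int {
    count := 0
    for n > 1 {
        n /= 2
        count++
    }
    return count // e.g. 10 for n = 1024
}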

O(n^2)

Whenever you deal with this pattern, be cautious. In this example, we print all possible pairs of an array of integers, which leads to a time complexity of O(n^2). Nested loops in particular should set your alarm bells ringing.
You should only consider such algorithms if you can bound the input size.

func PrintAllPairs(list []int) {
   for i := 0; i < len(list); i++ {
      for j := 0; j < len(list); j++ {
         fmt.Printf("%v:%v", list[i], list[j])
      }
   }
}

O(2^n)

These algorithms are extremely expensive, since their runtime rises meteorically. In this example, a recursive calculation of the Fibonacci numbers, every additional num roughly doubles the number of recursive calls. Just avoid these.

func RecursiveFibonacci(num int) int {
   if num <= 1 {
      return num
   }
   // Every call branches into two further calls, so the call tree
   // roughly doubles with each additional num.
   return RecursiveFibonacci(num-2) + RecursiveFibonacci(num-1)
}
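
If you actually need Fibonacci numbers, a simple loop brings this down to O(n) time and O(1) space. A quick sketch of my own (the function name is not from the repository):

func IterativeFibonacci(num int) int {
   a, b := 0, 1
   // Keep only the last two values instead of recomputing them.
   for i := 0; i < num; i++ {
      a, b = b, a+b
   }
   return a
}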

O(n!)

The most trivial example of an algorithm with factorial time complexity is - well - factorial addition: computing n! using nothing but addition. The function below makes n recursive calls, each of which makes n-1 calls, and so on, which amounts to n * (n-1) * ... * 1 = n! calls in total.

func FactorialAddition(n int) int {
   if n <= 1 {
      return n
   }
   sum := 0
   // n recursive calls, each of which spawns n-1 more, and so on:
   // n! calls in total. The result, incidentally, is n! as well.
   for i := 0; i < n; i++ {
      sum += FactorialAddition(n - 1)
   }
   return sum
}

Time vs. Space Complexity

By now, we should have a good understanding of how complexity is computed and described. So far, we have only looked at the time it takes to execute an algorithm and the respective growth rate. But besides time, space is the other resource in computation that must be analysed.
If we look at our example from the beginning, the selection sort algorithm, the space complexity is O(1): even though the time complexity is O(n^2), the only extra memory we need is the swap variable that holds the value currently being swapped. Instead of counting executed operations, we now analyse how much memory has to be allocated. In most cases, space complexity is lower than time complexity.
To illustrate this, we will have a look at a simple example.

First of all, we need to know how much memory is used by different data types:

Type                                                    Size
bool, char, unsigned char, signed char, int8            1 byte
int16, short, unsigned short, wchar_t                   2 bytes
float, int32, int, unsigned int, long, unsigned long    4 bytes
double, int64, long double, long long                   8 bytes
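
These are the typical sizes for C-style types; Go's int, for instance, is platform dependent and usually takes 8 bytes on a 64-bit system. That detail does not change any Big O result, since constant factors drop out anyway. If you want to check the sizes on your own machine, here is a quick sketch using unsafe.Sizeof:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    // Sizes in bytes on the machine this runs on.
    fmt.Println(unsafe.Sizeof(int8(0)))  // 1
    fmt.Println(unsafe.Sizeof(int16(0))) // 2
    fmt.Println(unsafe.Sizeof(int32(0))) // 4
    fmt.Println(unsafe.Sizeof(int64(0))) // 8
    fmt.Println(unsafe.Sizeof(int(0)))   // 8 on a typical 64-bit system
}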

Now let's have a look at a simple sum function:

// n is the length of the array
func sum(arr []int, n int) int {
    var result int // 4 bytes for result
    for i := 0; i < n; i++ { // 4 bytes for i
        result += arr[i]
    }
    return result
}

Our space complexity scales linearly with the size of our array. Besides the three variables result, i and n, which take up 3*4 bytes of memory, each element of the array takes up 4 bytes as well (assuming the 4-byte integers from the table above; a different constant factor would not change the analysis). This leads us to 4*n + 3*4 bytes. Drop the constants and we are left with O(n). Note that if we counted only the memory the function allocates itself, the so-called auxiliary space, the result would be O(1), since the array already belongs to the input.
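
Recursion adds one more subtlety here, worth a quick sketch of my own (this function is not from the repository): every recursive call adds a stack frame, which counts as space, too.

func recursiveSum(arr []int) int {
    // Each recursive call adds a stack frame, so this version of sum
    // uses O(n) space even though it never allocates anything itself.
    if len(arr) == 0 {
        return 0
    }
    return arr[0] + recursiveSum(arr[1:])
}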

Summary

By now, you hopefully have a better understanding of Big O and its role in software development. Even though you will most likely not be
confronted with this topic explicitly in your daily business as a software developer, it is important to internalize these
principles to be able to analyse your code and make a well-founded statement about its complexity.

If you have any questions on this topic, feedback, or ideas for new topics you want me to write about, feel free to contact me or follow me on Twitter.
Besides being a way to improve my own learning, I enjoy technical writing, so I am always happy about new ideas and discussions.
