Catherine Kawara

Data Structure and Algorithms 102: Deep Dive into Data Structure and Algorithms

In my previous DSA article, we did a basic introduction to data structures and algorithms. Today we are going to talk about something a little more complex: time complexity. For this article, I will be using Ruby for the examples.

When writing code, it is important to consider how long it will take to complete a task as the input size grows. Time complexity describes how an algorithm's running time grows with the size of its input. Input size in this context is the argument a method takes: if a method takes a string as an argument, that string is the input, and the length of the string is the input size.
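
As a quick made-up illustration, a method that counts the vowels in a string has to look at every character, so its running time depends on the string's length:

# Hypothetical example: the string is the input; its length is the input size.
def count_vowels(str)
  str.chars.count { |char| "aeiou".include?(char.downcase) }
end

count_vowels("hello")       # scans 5 characters
count_vowels("hello world") # scans 11 characters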

Big O Notation

Big O notation describes the complexity of your code using algebraic terms. It can express an algorithm's best, worst, and average-case running time. For our purposes, we're going to focus primarily on Big O as it relates to time complexity, and we'll be covering four main time complexities.
There's more on Big O notation here.

Constant Runtime: “O(1)”

O(1) means that the algorithm takes a constant amount of time to run, regardless of the size of the input.

arr = [3, 1, 6, 9, 10, 2]
def print_first_two(arr)
   puts arr[0] # prints out 3
   puts arr[1] # prints out 1
end
print_first_two(arr) # always two operations, no matter how long arr is
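
Another common constant-time operation is looking up a value in a hash by its key; the hash below is just made-up sample data:

prices = { "apple" => 3, "mango" => 5 } # made-up sample data
puts prices["mango"] # a single lookup, no matter how many keys the hash has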

Linear Runtime: “O(n)”

O(n) means that the run time increases at the same pace as the input. One of the most common linear-time operations is traversing an array. Methods like each and map run through the entire collection of data, from start to finish.

arr = [3, 1, 6, 9, 10, 2]
def print_all(arr)
   arr.each do |num|
      puts num
   end
end
print_all(arr)

Quadratic Runtime: “O(n²)”

O(n²) means that the algorithm runs in quadratic time: the number of operations grows with the square of the input size.
In programming, many of the more basic sorting algorithms have a worst-case run time of O(n²) (see the Bubble Sort sketch after this list):

  • Bubble Sort
  • Insertion Sort
  • Selection Sort
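
As a rough sketch (my own example, not code from the article), here is a minimal Bubble Sort; the two nested passes over the array are where the n² comes from:

# Minimal Bubble Sort sketch: two nested loops over the array,
# so roughly n * n comparisons in the worst case.
def bubble_sort(arr)
  arr = arr.dup # work on a copy so the caller's array is untouched
  (arr.length - 1).times do
    (arr.length - 1).times do |i|
      # swap neighbouring elements that are out of order
      arr[i], arr[i + 1] = arr[i + 1], arr[i] if arr[i] > arr[i + 1]
    end
  end
  arr
end

bubble_sort([6, 3, 9, 1]) # => [1, 3, 6, 9]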

In the example below we have a nested loop: for each iteration of the outer loop, the inner loop iterates over the ENTIRE array before the outer loop moves on to its next element, and this continues until the outer loop reaches the end of the array. That’s a LOT of iterations. Even an array with as few as 1,000 elements would create a million operations.

def print_pairs(arr)
   arr.each do |letter1|
      arr.each do |letter2|
         puts letter1 + letter2
      end
   end
end
print_pairs(["A", "B", "C"]) # prints out 9 pairs
print_pairs(["A", "B", "C", "D"]) # prints out 16 pairs

Logarithmic Runtime: “O(log n)”

O(log n) means that the running time grows in proportion to the logarithm of the input size: the run time barely increases even as you exponentially increase the input. This is a highly efficient runtime. A classic example of logarithmic runtime is binary search. When you have a sorted array, a binary search repeatedly halves your data by checking whether the item you are looking for is larger or smaller than the middle element.

def binary_search(n, arr)
  i = 0
  j = arr.length - 1

  while i <= j
    middle = (i + j) / 2   # parentheses matter: (i + j) / 2, not i + j / 2
    if arr[middle] == n
      return true
    elsif arr[middle] < n
      i = middle + 1       # search the right half
    else
      j = middle - 1       # search the left half
    end
  end
  false
end
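
A quick check with a sorted array (the sample values are just for illustration):

sorted = [1, 3, 6, 9, 10, 22]
binary_search(9, sorted)  # => true
binary_search(4, sorted)  # => false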

That's all for today. It might be a lot to take in, so take your time to understand it. Here's a cheat sheet that might be of great help.
Till next time, happy coding ✌!

Top comments (2)

Duc Filan

This is not 102!

AlgoT

101.2? Never learnt this stuff, so found this lil intro useful (though it is a lil...lil)