DEV Community

Ankit Pariyar

Introduction to Time Complexity with Ruby

Table of Contents

 1. Some terms

       1.1. Time complexity

       1.2. Space complexity

       1.3. Worst/Best/Average
 2. Rules for calculating Big O
 3. O(n)
 4. O(n^2)
 5. O(log n)
 6. When we have to loop through two inputs

Big O notation is a way to describe the efficiency of algorithms in terms of time and space complexity. It helps programmers understand how an algorithm's performance changes as the input size increases. It provides a concise notation to compare and analyze algorithms based on their scalability.

Some terms

Time complexity

  • The time complexity of an algorithm is the number of steps it takes to complete its task
  • It measures how much time the code will take to run
  • More precisely, time complexity describes how an algorithm's running time grows as the input size increases, which lets us predict how it will scale

Space complexity

  • It measures how much memory the code will take
  • More precisely, space complexity describes how an algorithm's memory usage grows as the input size increases

Worst/Best/Average

  • "O" (pronounced "big O"): Represents the upper bound or worst-case scenario
  • "Ω" (pronounced "big omega"): Represents the lower bound or best-case scenario
  • "θ" (pronounced "theta"): Represents the average-case scenario

Everything described below refers to time complexity.

Rules for calculating Big O

  • Worst Case: When calculating the Big O notation, we focus on the worst-case scenario
  • Remove Constants: We ignore constants and coefficients, since they do not change the overall growth rate.
def findMax(array)
  max = array[0]            # constant time, O(1)
  array.each do |element|   # one pass over the array, O(n)
    max = element if element > max
  end
  return max
end

In this example, we have a function findMax that takes an array as input and returns the maximum element. Let's apply the Big O rules:

  • Worst Case: The worst-case scenario occurs when the maximum element is located at the end of the array, requiring us to iterate through all elements.
  • Remove Constants: The constant time operation of assigning max = array[0] is insignificant compared to the iterative loop.

Thus, the time complexity of the findMax function is O(n) according to Big O notation.

  • Drop Non Dominants
def print_pairs_and_elements(array)
  array.each do |element|
    puts element
  end

  array.each do |outer_element|
    array.each do |inner_element|
      puts "#{outer_element}, #{inner_element}"
    end
  end
end

In this example, the print_pairs_and_elements function takes an array as input. It first iterates over the array once in the first each loop and prints each element. This portion of the code has a time complexity of O(n) since it scales linearly with the size of the input array.

Following that, the function has nested each loops. For each element in the outer loop, it iterates through the entire array in the inner loop and prints out the pair of elements. This portion of the code has a time complexity of O(n^2), since it scales quadratically with the size of the input array. Taken together, the function performs O(n^2 + n) work.

When analyzing the overall time complexity, we drop the non-dominant term, which is O(n) in this case. This is because, as the input size increases, the quadratic term O(n^2) will dominate the overall performance.

Therefore, we simplify the time complexity to O(n^2) because the quadratic term has a greater impact on the efficiency of the function compared to the linear term.
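A quick way to see this is to count operations directly. The sketch below (assuming each printed line counts as one operation, as in print_pairs_and_elements above) shows how the quadratic term swamps the linear one as n grows:

```ruby
# Total operations for one linear pass plus one nested pass:
# n from the single loop, n^2 from the nested loops.
def operation_count(n)
  n + n * n
end

puts operation_count(10)    # 110: the n^2 term contributes 100 of 110
puts operation_count(1000)  # 1001000: the linear term is under 0.1% of the total
```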

O(n)

  • represents a linear time complexity, where the execution time of an algorithm grows linearly with the input size
def linear_search(array, target)
  array.each do |element|
    return true if element == target
  end
  return false
end

In this example, the linear_search function takes an array and a target value as inputs. It iterates over each element in the array and checks if it matches the target value. If a match is found, it returns true; otherwise, it continues to the next element. If the entire array is traversed without finding a match, it returns false.

The time complexity of this algorithm is O(n) because the number of iterations directly depends on the size of the input array. As the array grows larger, the execution time of the algorithm will increase roughly linearly.

  • Another example
def print_twice(array)
  array.each do |element|
    puts element
  end

  array.each do |element|
    puts element
  end
end

In this example, the print_twice function takes an array as input. It iterates over the array twice using separate each loops and prints each element twice. As a result, the number of iterations is directly proportional to the size of the array.

When analyzing the time complexity, we can observe that the algorithm performs two iterations over the array, resulting in O(2n). However, in Big O notation, we drop the constant factor, so the time complexity simplifies to O(n). This is because the growth rate of the algorithm is ultimately determined by the input size, not by the specific constant factor.
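To make the 2n count concrete, here is a small instrumented sketch (the counter is an addition for illustration) mirroring print_twice. Doubling the input doubles the iterations, which is exactly linear growth:

```ruby
# Count the iterations performed by two back-to-back passes over an array.
def iterations_for_two_passes(array)
  count = 0
  array.each { count += 1 }
  array.each { count += 1 }
  count
end

puts iterations_for_two_passes([1, 2, 3])       # 6, i.e. 2n for n = 3
puts iterations_for_two_passes(Array.new(100))  # 200, still 2n
```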

O(n^2)

  • representing quadratic time complexity
def print_pairs(array)
  array.each do |outer_element|
    array.each do |inner_element|
      puts "#{outer_element}, #{inner_element}"
    end
  end
end

In this example, the print_pairs function takes an array as input. It uses nested each loops to iterate over the array twice. For each element in the outer loop, it iterates over the array again in the inner loop and prints out the pair of elements.

The total number of iterations in this algorithm is directly proportional to the square of the input size. If the array has 'n' elements, then the number of iterations will be approximately n * n, resulting in O(n^2) time complexity.
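Counting the visited pairs confirms the n * n figure. The counting helper below is illustrative, built on the same nested loops as print_pairs:

```ruby
# Count how many pairs the nested loops in print_pairs would visit.
def pair_count(array)
  count = 0
  array.each do |_outer|
    array.each do |_inner|
      count += 1
    end
  end
  count
end

puts pair_count([1, 2, 3])      # 9 pairs for n = 3
puts pair_count(Array.new(10))  # 100 pairs for n = 10
```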

O(log n)

  • representing logarithmic time complexity
def binary_search(array, target)
  low = 0
  high = array.length - 1

  while low <= high
    mid = (low + high) / 2
    if array[mid] == target
      return true
    elsif array[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end

  return false
end

In this example, the binary_search function takes a sorted array and a target value as inputs. It performs a binary search algorithm to find the target value within the array.

The binary search algorithm divides the search space in half at each step. It compares the middle element of the current range with the target value and narrows down the search to the left or right half accordingly. This process continues until the target value is found or the search space is exhausted.

The number of iterations required in the binary search algorithm grows logarithmically with the input size. As each iteration reduces the search space by half, the algorithm achieves a time complexity of O(log n), where 'n' represents the size of the input array.

Logarithmic time complexity means the running time grows very slowly as the input size increases: doubling the input adds only about one extra step. It demonstrates the advantage of divide-and-conquer strategies in reducing the search space efficiently.
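An instrumented variant of the binary search above (the step counter is an addition for illustration) makes the logarithmic growth visible. Even for 1024 elements, only around log2(1024) + 1 = 11 comparisons are needed in the worst case:

```ruby
# Binary search that returns how many loop iterations it performed.
def binary_search_steps(array, target)
  low = 0
  high = array.length - 1
  steps = 0

  while low <= high
    steps += 1
    mid = (low + high) / 2
    return steps if array[mid] == target
    if array[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end

  steps
end

puts binary_search_steps((1..1024).to_a, 1024)  # 11 steps for n = 1024
puts binary_search_steps((1..8).to_a, 8)        # 4 steps for n = 8
```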

When we have to loop through two inputs

Suppose we have a function that takes two arrays of sizes n and m as inputs and performs some operations on them. The time complexity of this function can be expressed as O(f(n, m)), where f represents the growth rate of the algorithm with respect to both n and m.

To determine the overall time complexity, we consider the dominant factors impacting the algorithm's performance. Here are a few scenarios:

  • Independent Operations: If the algorithm performs independent operations on the two input arrays, the time complexity is determined by the larger of the two inputs. In this case, the time complexity can be simplified to O(max(n, m)).
  • Nested Loops: If the algorithm uses nested loops, with one loop dependent on the size of n and the other dependent on the size of m, the time complexity can be expressed as O(n * m).
  • Sequential Operations: If the algorithm performs sequential operations, where the operations on the second array depend on the results of the operations on the first array, we sum the time complexities of each operation. The overall time complexity is determined by the dominant term. For example, if the first operation is O(n) and the second operation is O(m), the overall time complexity is O(n + m).
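The sequential and nested cases can be sketched by counting visits; the helper names below are illustrative, not from the article:

```ruby
# Sequential operations: each array is traversed once -> O(n + m).
def sequential_visits(a, b)
  count = 0
  a.each { count += 1 }
  b.each { count += 1 }
  count
end

# Nested loops: every pairing of elements is visited -> O(n * m).
def nested_visits(a, b)
  count = 0
  a.each { b.each { count += 1 } }
  count
end

puts sequential_visits([1, 2, 3], [1, 2])  # 5, i.e. n + m
puts nested_visits([1, 2, 3], [1, 2])      # 6, i.e. n * m
```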
