
Madhav Ganesan

Posted on • Edited on • Originally published at madhavganesan.hashnode.dev

How to calculate time and space complexity? (DSA - 2)

Finding the time and space complexity of a problem can seem very difficult until you read this blog :)

Time complexity:

The time complexity of an algorithm describes how its running time grows as the input size grows. It is measured by counting the basic operations the algorithm performs, not by wall-clock time.

Worst case:

It is a scenario in which the algorithm performs the least efficiently or takes the maximum amount of time or resources among all possible input data.
It is denoted by Big Oh (O). It is used to calculate the maximum time taken by an algorithm to execute completely.

Average case:

The average-case time complexity of an algorithm is the scenario in which the algorithm performs with typical efficiency or resource usage, calculated as the mean performance across all possible inputs of a given size.
It is denoted by Big Theta (Θ). It is used to calculate the average time taken by an algorithm to execute completely.

Best case:

The best-case time complexity of an algorithm is the scenario in which the algorithm performs the most efficiently or takes the least amount of time or resources, occurring under the most favorable conditions for the input data.
It is denoted by Big Omega (Ω). It is used to calculate the minimum time taken by an algorithm to execute completely.

Types of time complexities:

O(1) – Constant time complexity
O(n) – Linear time complexity
O(log n) – Logarithmic time complexity
(It shrinks the problem, typically by half, on each invocation)
O(n log n) – Linearithmic time complexity
(It breaks the problem into smaller chunks on each invocation and then stitches the results of those chunks back together)
O(n^2) – Quadratic time complexity
O(2^n) – Exponential time complexity
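
To make the logarithmic case concrete, here is a minimal binary search sketch (the function name and values are mine, not from the post): each comparison halves the remaining range, so at most about log2(n) iterations run.

```cpp
#include <vector>
using namespace std;

// Binary search on a sorted vector: each iteration halves the search
// range, so the loop runs at most about log2(n) + 1 times
// -> O(log n) time, O(1) extra space.
int binarySearch(const vector<int>& sorted, int target) {
    int lo = 0, hi = (int)sorted.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of lo + hi
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // target not present
}
```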


Example on how to calculate time complexity:

int a = 0; // C1
for (int i = 0; i < N; i++) { // C2 * (N + 1) condition checks
    for (int j = 0; j < N; j++) { // C2 * N * (N + 1) condition checks
        a = a + j; // C1 * N^2
    }
}

T(n) = C1 + C2(n+1) + C2·n(n+1) + C1·n^2 = (C1 + C2)n^2 + 2C2·n + (C1 + C2)
Dropping the constants and lower-order terms leaves the dominant n^2 term.
Time complexity: O(n^2)
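
The same step counting applied to a single loop (a hypothetical helper of mine, not from the post) shows why constants and lower-order terms are dropped:

```cpp
// Step counting for a single loop: the assignment runs once (C1), the
// loop condition is checked N + 1 times (C2), and the body runs N times
// (C1), so T(N) = C1 + C2(N + 1) + C1*N -> O(N) after dropping constants.
long long sumFirstN(int N) {
    long long a = 0;              // C1
    for (int i = 0; i < N; i++) { // C2 * (N + 1)
        a = a + i;                // C1 * N
    }
    return a;
}
```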

Example where best, worst, and average cases differ:

#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>
using namespace std;

void exampleFunction(int N) {
    vector<int> arr(N);
    for (int i = 0; i < N; ++i) {
        arr[i] = rand() % 100; // Fill the array with random values
    }

    // Linear search for the target value: O(N)
    bool flag = false;
    for (int i = 0; i < N; ++i) {
        if (arr[i] == 50) {
            flag = true;
            break;
        }
    }

    // Nested loops run only when the target was not found: O(N^2)
    if (!flag) {
        for (int i = 0; i < N; ++i) {
            for (int j = 0; j < N; ++j) {
                cout << arr[i] << " " << arr[j];
            }
        }
    }
}

Worst Case (Big-O): O(N^2)
When the target value (50) is not found in the array, the nested loops will execute, resulting in O(N^2).

Average Case (Theta): Θ(N^2)
On average, the linear search scans about half the array (N/2 comparisons), and for inputs where the target is absent the nested loops still execute, so the mean cost is dominated by the N^2 term, giving Θ(N^2).

Best Case (Omega): Ω(N)
If the target value (50) is found at the first position, the search stops after a single comparison and the nested loops never execute; the array-filling loop still takes N steps, so the lower bound is Ω(N).

Other Examples:

#include <iostream>
using namespace std;

int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    int n = 10;
    cout << "Fibonacci of " << n << " is " << fibonacci(n);
    return 0;
}

Time complexity: O(2^N)
Space complexity: O(N)
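
For contrast, a memoized sketch (the helper names are mine) shows how caching changes the complexity: each value is computed only once, so the O(2^N) recursion collapses to O(N) time, at the cost of O(N) space for the memo table.

```cpp
#include <vector>
using namespace std;

// Memoized Fibonacci: each value 0..n is computed only once, so time
// drops from O(2^n) to O(n); the memo table plus the recursion depth
// give O(n) space.
long long fibMemo(int n, vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n]; // already computed
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

long long fibonacciFast(int n) {
    vector<long long> memo(n + 1, -1); // -1 marks "not computed yet"
    return fibMemo(n, memo);
}
```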

Space Complexity:

The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem.

How to calculate space complexity?

1)Identify Memory Usage:

a) Variables and Constants:
Fixed-size variables and constants use O(1) space regardless of the input size.

b) Dynamic Data Structure:
For every data structure, the space complexity differs.
Ex.
Array - O(n)
Linked List - O(n)
Hash Map - varies

c) Recursion:
For recursive algorithms, account for the space used by the call stack. Each recursive call uses space proportional to the depth of the recursion.
Depth of Recursion: For a recursion depth of d, the space complexity is O(d). If the depth is proportional to the input size n, then it is O(n).
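
A small sketch of this (the function is a hypothetical example of mine): the recursion goes n levels deep before unwinding, so the call stack alone costs O(n) space.

```cpp
// Recursive sum of 1..n: each call adds one stack frame before the base
// case is reached, so the recursion depth is n and the space complexity
// is O(n) even though no heap memory is allocated.
long long recursiveSum(int n) {
    if (n == 0) return 0; // base case: the stack is n frames deep here
    return n + recursiveSum(n - 1);
}
```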


2) Consider In-Place Modifications
Some algorithms modify the input data structure in place, which may result in O(1) space complexity if no additional data structures are used.
Ex. Bubble and Insertion sort
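
As a minimal in-place example (the function name is mine, not from the post), reversing a vector needs only two index variables, so the extra space stays O(1) however large the input is.

```cpp
#include <vector>
#include <utility>
using namespace std;

// In-place reversal: only two index variables are allocated, so the
// extra space is O(1) no matter how large the vector is.
void reverseInPlace(vector<int>& v) {
    int i = 0, j = (int)v.size() - 1;
    while (i < j) {
        swap(v[i], v[j]); // exchange the ends, move inward
        ++i;
        --j;
    }
}
```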

Examples on how to calculate space complexity:

#include <iostream>
#include <vector>
using namespace std;

void example4(int n) {
    vector<vector<int>> dp(n, vector<int>(n)); 

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            dp[i][j] = i + j; // O(1) operation
        }
    }
    // Space used is the 2D vector, O(n^2)
}

Space complexity: O(n^2)
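
When each dp row depends only on the previous row, a common trick is to reuse a single 1D row, cutting space from O(n^2) to O(n). Here is a sketch with a hypothetical recurrence of mine (counting lattice paths in an n x n grid), not the dp from the example above:

```cpp
#include <vector>
using namespace std;

// Rolling-row DP: one reusable 1D row replaces the full 2D table, so
// space is O(n) instead of O(n^2). Recurrence: the new row[j] is the
// old row[j] (value from the previous row) plus row[j - 1].
long long gridPaths(int n) {
    vector<long long> row(n, 1);   // first row: one path to each cell
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < n; j++) {
            row[j] += row[j - 1];  // overwrite the row in place
        }
    }
    return row[n - 1]; // paths from the top-left to the bottom-right
}
```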



Feel free to reach out if you have any questions or need further assistance. 😊📁✨

Top comments (2)

Jon Randy 🎖️ • Edited

I can probably count on one hand the number of times that Big O notation has been useful in a (so far) 30 year career as a developer. I've certainly never been asked about it, or asked a candidate about it in interviews.

Madhav Ganesan

Good to hear that!
But I was asked 🥲