DEV Community

Jose Torreblanca

Understanding Big O Notation: A Beginner's Reflection on Writing Better Programs

When I started learning about programming, I focused on writing code that simply worked. But as I continued to grow as a developer, I realized that writing efficient programs is just as important as making them functional. One of the tools that helped me gain this understanding is Big O Notation.

In this article, I'll share what I have learned about Big O Notation, why it matters, and how to measure it. I'll focus on the three most common time complexities: O(1), O(n), and O(n²).

[Image: Big O complexity growth graph]


What is Big O Notation?

Big O Notation is a way to measure how the performance of an algorithm changes as the size of the input grows. It tells us about the time complexity (how long it takes to run) or space complexity (how much memory it uses).

By understanding Big O, developers can:

  • Compare the efficiency of different solutions.
  • Identify bottlenecks in code.
  • Write scalable programs that perform well even with large inputs.

Think of Big O Notation as a measure of how an algorithm "scales" rather than how fast it runs on your specific machine. It gives a general idea of performance trends.
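To make that idea concrete, here is a small illustration (the function names are my own, not from a library): two ways to sum the numbers from 1 to n. One scales with n, the other does not.

```javascript
// O(n): the loop performs n additions, so the work grows with n.
function sumLinear(n) {
    let total = 0;
    for (let i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

// O(1): the closed-form formula n(n + 1) / 2 produces the same answer
// in a fixed number of operations, no matter how large n is.
function sumConstant(n) {
    return (n * (n + 1)) / 2;
}
```

Both functions return the same result, but only the first one gets slower as n grows. That difference in scaling, not the raw speed on any one machine, is what Big O captures.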


Key Time Complexities: O(1), O(n), O(n²)

Let’s dive into the three time complexities I have learned so far.

1. O(1) - Constant Time

An algorithm has O(1) time complexity if it always takes the same amount of time to complete, regardless of the input size.

Example:

function getFirstElement(array) {
    return array[0]; // a single index access, regardless of array length
}

Here, the function retrieves the first element of an array. No matter how large the array is, this operation happens in constant time.

  • Why it matters: O(1) operations are the most efficient because their execution time does not increase with the size of the input.
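Array indexing is not the only constant-time operation. As another sketch (this example and its data are mine, not from the article), looking up a value by key in a JavaScript Map is, on average, also O(1): the lookup time does not depend on how many entries the Map holds.

```javascript
// A Map with a few entries; the same lookup cost would apply
// with thousands of entries.
const ages = new Map([
    ["Ana", 30],
    ["Luis", 25],
]);

function getAge(name) {
    return ages.get(name); // average-case constant-time lookup by key
}
```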

2. O(n) - Linear Time

An algorithm has O(n) time complexity if the execution time grows linearly with the input size.

Example:

function printAllElements(array) {
    // one iteration per element: n iterations for n elements
    for (let i = 0; i < array.length; i++) {
        console.log(array[i]);
    }
}

Here, the function iterates through every element of the array. If the array has 10 elements, the loop runs 10 times. If the array has 1,000 elements, it runs 1,000 times.

  • Why it matters: O(n) operations scale reasonably well but can become slower with very large inputs.
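Printing is not the only linear-time pattern. Here is another sketch (a function of my own, not from the article): finding the largest value in an array. Since there is no way to know the maximum without looking at every element at least once, a single pass, and therefore O(n), is the best we can do.

```javascript
// Finding the largest value requires inspecting every element once,
// so the work grows linearly with the array's length: O(n).
function findMax(array) {
    let max = array[0];
    for (let i = 1; i < array.length; i++) {
        if (array[i] > max) {
            max = array[i];
        }
    }
    return max;
}
```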

3. O(n²) - Quadratic Time

An algorithm has O(n²) time complexity if its execution time grows quadratically with the input size. Typically, this happens when we have nested loops that each iterate over the same input.

Example:

function printPairs(array) {
    for (let i = 0; i < array.length; i++) {         // outer loop: n iterations
        for (let j = 0; j < array.length; j++) {     // inner loop: n iterations per outer iteration
            console.log(array[i], array[j]);         // runs n * n = n² times in total
        }
    }
}

Here, for each element in the outer loop, the inner loop runs through the entire array. If the array has 10 elements, this results in 100 operations (10 * 10).

  • Why it matters: O(n²) operations can quickly become inefficient as input size grows. For example, with 1,000 elements, this would require 1,000,000 operations!
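Quadratic algorithms can often be rewritten more efficiently. As a hedged sketch (both functions are mine, not from the article), consider checking an array for duplicates: the obvious nested-loop version is O(n²), while a single pass with a Set brings it down to O(n) on average.

```javascript
// O(n²): compare every pair of elements.
function hasDuplicatesQuadratic(array) {
    for (let i = 0; i < array.length; i++) {
        for (let j = i + 1; j < array.length; j++) {
            if (array[i] === array[j]) return true;
        }
    }
    return false;
}

// O(n) on average: a Set remembers what we have already seen,
// so one pass through the array is enough.
function hasDuplicatesLinear(array) {
    const seen = new Set();
    for (const value of array) {
        if (seen.has(value)) return true;
        seen.add(value);
    }
    return false;
}
```

For 1,000 elements, the first version may perform up to roughly 500,000 comparisons, while the second performs at most 1,000 Set lookups.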

How to Measure Big O Notation

To determine the Big O of a function:

  1. Count the basic operations in your code, like loops, function calls, and calculations.
  2. Focus on the dominant term — the part of the code that grows the fastest as input size increases.
  3. Ignore constants and smaller terms because they become negligible as the input size grows.

Let’s analyze an example:

function exampleFunction(array) {
    console.log(array[0]);       // O(1)

    for (let i = 0; i < array.length; i++) {   // O(n)
        console.log(array[i]);
    }

    for (let i = 0; i < array.length; i++) {   // O(n)
        for (let j = 0; j < array.length; j++) {   // O(n)
            console.log(array[i], array[j]);
        }
    }
}
  1. The first operation console.log(array[0]) runs in O(1) time.
  2. The first loop runs in O(n) time.
  3. The nested loop runs in O(n²) time.

To find the overall time complexity, we combine these:

  • O(1) + O(n) + O(n²)
  • The dominant term is O(n²), so the time complexity of this function is O(n²).
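The same rule explains why we drop constants. In this sketch of mine (not from the article), two sequential loops over the same array do 2n iterations in total, which is O(2n), yet after dropping the constant the function is still classified as O(n): sequential work adds up, but only the dominant term survives.

```javascript
// Two sequential passes over the array: n + n = 2n iterations.
// O(2n) simplifies to O(n) because constants are ignored.
function doubledPass(array) {
    const output = [];
    for (let i = 0; i < array.length; i++) {
        output.push(array[i]);       // first pass: n iterations
    }
    for (let i = 0; i < array.length; i++) {
        output.push(array[i] * 2);   // second pass: n more iterations
    }
    return output;                   // 2n items produced, still O(n) overall
}
```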

Final Thoughts

Learning about Big O Notation has been a game-changer for me. It helps me think critically about the efficiency of my code and make better decisions when choosing data structures and algorithms. By understanding time complexities like O(1), O(n), and O(n²), I can now:

  • Identify the strengths and weaknesses of my solutions.
  • Optimize my code for better performance.
  • Avoid writing programs that work but scale poorly.

As I continue learning, I'll explore more complex time complexities and data structures. For now, this reflection gives me a solid foundation to write better and more efficient programs.

Happy coding!

Key Takeaways:

  1. Big O Notation measures how algorithms scale with input size.
  2. O(1): Constant time - execution time remains the same.
  3. O(n): Linear time - execution time grows proportionally with input size.
  4. O(n²): Quadratic time - execution time grows quadratically due to nested loops.
  5. Always focus on the dominant term when analyzing code.

I hope this reflection helps other beginners like me! Understanding these basics will pave the way for solving more challenging problems efficiently.
