Eben Eleazer
O(1): Constantly Efficient

In the last post, we broke down O(n), aka linear time, because (in my opinion) it's the most straightforward common complexity to understand: as n (the number/size of inputs) grows, the number of operations grows at the same rate.

The truth is, there's an even simpler common complexity. Not only that, it's also by far the most efficient.

Of course, I'm talking about O(1), or Constant Time.

Back to the Catering Company

If you'll remember from last time, we had a catering company that was hired to cater an event. We decided to make an individual plate for each guest, and we therefore calculated our catering algorithm to be O(n).

But, what if we didn't have to make an individual plate for everyone?

What if, instead, we just made one big pot of soup!

Then we wouldn't have to do anything all day! We could just make one giant pot of soup and make sure it's big enough for all the guests.

Just like before, let's pseudocode this up:

# n = number of guests

def cater_party(n)
  # make 1 big pot of soup
end


So how does our number of operations grow in relation to our input size?

If there's 1 guest, we make 1 pot of soup.
If there are 100 guests, we make 1 pot of soup.
If there are 1,000,000,000 guests, we make 1 pot of soup.

Looking at our algorithm, and thinking about it logically, no matter how many people attend our event, we're still only going to make 1 pot of soup.

The number of operations, in our case, is constant. Our algo simply doesn't do more work as the input grows.
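We can sanity-check this by adding a counter to a Ruby sketch of the pseudocode above (the `operations` counter is made up purely for illustration):

```ruby
# Count the "work" done by our catering algorithm.
# No matter the guest count n, we perform exactly one step.
def cater_party(n)
  operations = 0
  operations += 1 # make 1 big pot of soup
  operations
end
```

`cater_party(1)`, `cater_party(100)`, and `cater_party(1_000_000_000)` all return 1.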

[Graph: constant complexity]

Remember, we can simplify constant values to 1 for the sake of our Big O notation (since the effect of a constant on complexity becomes negligible at higher n values). No matter how many operations it actually takes, if that number doesn't grow along with our input size, we might as well just consider it one big operation.
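For example, even if the soup actually involves a few fixed steps, the count never changes with n, so it still simplifies to O(1) (the step names here are invented for illustration):

```ruby
def cater_party(n)
  steps = 0
  steps += 1 # chop vegetables
  steps += 1 # boil water
  steps += 1 # serve the pot
  steps      # always 3, whether n is 1 or 1,000,000,000
end
```

Technically that's "3 operations", but Big O drops the constant: O(3) is just O(1).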

And so we get O(1).

The most efficient complexity

Because they run in constant time, O(1) algorithms are easily the most efficient complexity class. A truly constant-time algo will process any amount of data in the same time it would take to process just 1 element.

A code example that you've probably used would be accessing an array element by index. In this case, since you know the index, it doesn't matter how large the array is: the program knows exactly where the element is.
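For instance, in Ruby:

```ruby
guests = ["Ada", "Grace", "Alan"]

# Indexing jumps straight to the element's location,
# so this is O(1) no matter how long the array is.
guests[1] # => "Grace"
```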

There are plenty of other algorithms with a complexity of O(1), and many more complex algorithms use them as building blocks. In fact, you can think of most basic operations as algorithms with O(1) complexity.
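A few everyday examples in Ruby; hash lookups and end-of-array pushes/pops are constant time on average:

```ruby
ages = { "Ada" => 36, "Grace" => 45 }
ages["Ada"]   # hash lookup: O(1) on average

stack = [1, 2]
stack.push(3) # appending to the end of an array: O(1) amortized
stack.pop     # removing the last element: O(1)
```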

Next time

Now that we've gone through O(n) and O(1), we're going to start getting more complex.
