Efficiency of Programs: What is the Big O Notation?

When we talk about the efficiency of a program, the first thing that jumps to mind is running time. The actual running time of your program depends on

  • what machine it is running on
  • what other tasks the machine is running at the same time

As the memory requirement of your program increases, the program slows down, not because the machine is running out of memory but because the program is competing for memory with the other programs on the machine. The OS keeps paging things in and out of memory, and the higher the memory load, the more time your program loses waiting to be back in memory.

Therefore, time is not the best metric for measuring the efficiency of your algorithm; instead, count the number of primitive constant-time operations it performs.
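
As a minimal sketch of this idea (the function and the exact operation count are my own illustration, not a standard), here is a linear scan whose primitive operations we count by hand:

```python
def sum_list(values):
    """Sum a list, counting primitive constant-time operations."""
    total = 0          # 1 operation: assignment
    for v in values:   # n iterations
        total += v     # 2 operations per iteration: add + assign
    return total       # 1 operation: return

# For an input of size n, t(n) = 2n + 2 operations.
# The count is machine-independent: it stays the same whether the
# program runs on a fast computer or a busy one.
print(sum_list([1, 2, 3]))  # 6
```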

Orders of Growth: The Big O notation

In mathematical terms, we say that the running time t, as a function of the input size n, written t(n), is a member of, say, quadratic time O(n^2), but is not equal to it. The constant factors are omitted because we are considering large inputs, for which the constant factors are minuscule in comparison to the variable factor.

The Big Theta notation is often used interchangeably with the Big O notation; it tells you how closely t(n) and g(n) are related.

It is worth noting that the Big O notation is typically used to describe the worst-case scenario, since it gives an upper bound.

If an algorithm is Θ(g(n)), it means that the running time of the algorithm, as n (the input size) gets larger, is proportional to g(n).
If an algorithm is O(g(n)), it means that the running time of the algorithm, as n gets larger, is at most proportional to g(n).
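
Using the operation count t(n) = 2n + 2 from the sketch above, here is how the constants get absorbed (the constants c1 and c2 are one possible choice, not unique values):

```latex
t(n) = 2n + 2
\text{For all } n \ge 1: \quad 2n \;\le\; 2n + 2 \;\le\; 4n
\text{So with } c_1 = 2 \text{ and } c_2 = 4: \quad c_1 \cdot n \;\le\; t(n) \;\le\; c_2 \cdot n
\therefore\; t(n) \in \Theta(n), \text{ and therefore also } t(n) \in O(n)
```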

There are some common families of time complexity for each type of algorithm:
1. For searching algorithms, in order of increasing time:

  • Θ(1) constant time, e.g. a hash table lookup
  • Θ(log n) logarithmic time, e.g. binary search on a sorted array (a sketch follows this list)
  • Θ(n) linear time, e.g. a scan through an unsorted list
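
A minimal binary search sketch (the function is my own illustration; Python's standard library offers the same idea in the bisect module). Each step halves the search range, so at most about log2(n) steps are needed:

```python
def binary_search(sorted_values, target):
    """Return the index of target in sorted_values, or -1 if absent.

    Each iteration halves the search range, so the loop runs
    at most about log2(n) times: Θ(log n).
    """
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```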

2. For sorting algorithms, in order of increasing time:

  • Θ(n log n), e.g. divide-and-conquer sorts such as merge sort, which repeatedly cut the data in half and sort the halves (a sketch follows this list)

  • Θ(n^2) quadratic time, e.g. insertion sort or bubble sort
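
A minimal merge sort sketch (my own illustration, written for clarity rather than speed). The data is halved about log2(n) times, and each level of merging touches all n elements, giving Θ(n log n):

```python
def merge_sort(values):
    """Sort by recursively halving, then merging the sorted halves."""
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Merge step: Θ(n) work at each of the ~log n levels of halving,
    # giving Θ(n log n) overall.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```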

3. For matrix algorithms:

  • Θ(n^2), e.g. visiting every entry of an n × n matrix (a sketch follows this list)
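
A minimal sketch of quadratic work on an n × n matrix (transposing it is my own example, not the lecture's):

```python
def transpose(matrix):
    """Transpose an n x n matrix by touching each entry once: Θ(n^2)."""
    n = len(matrix)
    result = [[0] * n for _ in range(n)]
    for i in range(n):       # n iterations
        for j in range(n):   # n iterations each: n * n entries total
            result[j][i] = matrix[i][j]
    return result

print(transpose([[1, 2], [3, 4]]))  # [[1, 3], [2, 4]]
```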

4. Intractable running times:

Theoretically we can write programs with such running times, but in practice they would never finish for large input sizes, so you need an approximate solution that is good enough. A sketch of one of these follows the list:

  • Θ(2^n)
  • Θ(n!)
  • Θ(n^n)
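
A minimal sketch of Θ(2^n) behavior: brute-force subset sum, which checks every subset (the problem choice is my own illustration):

```python
from itertools import combinations

def subset_sum_exists(values, target):
    """Brute force: check every subset of values for the target sum.

    An n-element list has 2**n subsets, so this is Θ(2^n):
    adding one element to the input doubles the work.
    """
    n = len(values)
    for size in range(n + 1):
        for subset in combinations(values, size):
            if sum(subset) == target:
                return True
    return False

print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```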

Case Study: Recursion and Iteration

In recursion, each recursive call to the function has to set aside memory for its input until a base case is reached; the inner-most procedure then reports back to its parent procedures, and evaluation happens on the way out. The concern here isn't running time but memory usage, which is linear in the size of the problem.

An iterative approach does its evaluation on the way in: by the time you reach the base case, the whole computation is already complete. All the work is done prior to each recursive call, so there is no need to remember which procedure is waiting for an intermediate answer, and only one chunk of memory is needed for the running answer.
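
A minimal sketch contrasting the two processes with factorial (the classic example from the SICP lectures, translated to Python here; note that CPython does not eliminate tail calls, so this illustrates the shape of each process rather than actual constant memory):

```python
def factorial_recursive(n):
    """Recursive process: n stack frames pile up before any
    multiplication happens, so memory use is linear in n."""
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)  # work happens on the way out

def factorial_iterative(n, acc=1):
    """Iterative process: the running answer is carried along in acc,
    so each step is complete before the next call is made."""
    if n == 0:
        return acc
    return factorial_iterative(n - 1, acc * n)  # work happens on the way in

print(factorial_recursive(10))  # 3628800
print(factorial_iterative(10))  # 3628800

# In a language with tail-call elimination, factorial_iterative really
# does run in constant memory; in Python, rewrite it as a while loop
# to get the same effect.
```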

Conclusion

First get the program to work, and then worry about efficiency. You should not prefer the iterative approach just because it is more efficient; it is better to get the procedures running correctly and readably first. An ounce of mathematics is worth a pound of computing.


Credit to the UC Berkeley Structure and Interpretation of Computer Programs lectures.
