
Discussion on: Big O Notation for beginners!!

cubiclesocial

For those who want a TL;DR of big-O, it usually boils down to loops. If you have one loop over a data structure, it's probably O(N). If you have one loop inside another, it's probably O(N^2). Three loops deep, O(N^3). And so on. Loops inside loops inside loops aren't always obvious! One function may have a loop and, inside that loop, call another function that has a loop which, in turn, calls another function that has a loop, and so on. Eventually you get WordPress.
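To make the "hidden loops" point concrete, here's a minimal C++ sketch (the function names are just illustrative): the second function only shows one loop, but each iteration calls a helper that has its own loop, so the total work is O(N^2).

```cpp
#include <cstddef>
#include <vector>

// One visible loop over the data: O(N).
long sum(const std::vector<int>& v) {
    long total = 0;
    for (int x : v) total += x;
    return total;
}

// Only one loop is visible here, but every iteration calls sum(),
// which loops too. The hidden inner loop makes this O(N^2).
long sumOfAllPrefixes(const std::vector<int>& v) {
    long total = 0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        std::vector<int> prefix(v.begin(), v.begin() + i + 1);
        total += sum(prefix);  // O(N) work inside an O(N) loop
    }
    return total;
}
```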

Algorithmic complexity is about more than just loops. It also involves data structures, and the underlying data structure affects how your own algorithms perform. A tree-based map, for instance, has O(log N) time for its operations. A hash table tends to have O(1) time for its operations unless there are massive key collisions, which push each operation toward O(N) - so filling the table with N colliding keys costs O(N^2) in total and can take out your full stack architecture at the same time (e.g. youtube.com/watch?v=R2Cq3CLI6H8).
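As a rough illustration, assuming C++'s standard containers (std::map is a balanced tree, std::unordered_map is a hash table), the per-operation costs look like this:

```cpp
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // Tree-based map: inserts and lookups are O(log N).
    std::map<std::string, int> tree;
    tree["alice"] = 1;                         // O(log N)
    bool hasAlice = tree.count("alice") > 0;   // O(log N)

    // Hash table: O(1) on average, but if many keys are forced into the
    // same bucket, each operation degrades toward O(N), so inserting
    // N colliding keys costs O(N^2) in total.
    std::unordered_map<std::string, int> hash;
    hash["bob"] = 2;                           // O(1) average
    bool hasBob = hash.count("bob") > 0;       // O(1) average

    return (hasAlice && hasBob) ? 0 : 1;
}
```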

By the way, big-O has an often-ignored little brother: the constant factor, call it C. C is the time and resources required to complete a single operation. So you could have an amazing O(N) algorithm BUT a massive C value that makes it perform as badly as an equivalent O(N^2) algorithm with a very small C value. An example of this is inserting 1 million nodes into an array. If you insert them one at a time and individually allocate RAM for each and every node, it will take much, much longer than adding a precalculation step to determine exactly how many nodes you will need, performing a single memory allocation big enough to hold all of them, and then running the main algorithm using that buffer. The difference can be substantial - waiting 500ms vs. waiting 15 seconds for the operation to complete (or even minutes vs. hours/days in some cases). Heap allocations are among the most expensive operations a program routinely performs and a major cause of a large C value.
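Here's a rough C++ sketch of that allocation example (the sizes and names are just for illustration, and cleanup is omitted): both versions are O(N), but the allocate-per-node version pays a much larger C.

```cpp
#include <cstddef>

struct Node { int value; };

constexpr std::size_t COUNT = 1'000'000;

// Version A: one heap allocation per node. Still O(N), but the
// per-node cost (the C) is dominated by a million separate allocations.
Node** buildOneAtATime() {
    Node** nodes = new Node*[COUNT];
    for (std::size_t i = 0; i < COUNT; ++i)
        nodes[i] = new Node{static_cast<int>(i)};
    return nodes;
}

// Version B: precalculate how many nodes are needed, allocate one
// block big enough for all of them, then fill it in. Same O(N)
// algorithm, far smaller C.
Node* buildInOneBlock() {
    Node* nodes = new Node[COUNT];      // single allocation
    for (std::size_t i = 0; i < COUNT; ++i)
        nodes[i].value = static_cast<int>(i);
    return nodes;
}
```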

At the end of the day, big-O isn't terribly critical if the software runs well. Some exceptions: a satellite that will be placed in orbit or a medical device that will be used in a hospital. When the software does NOT run well, there are usually profiling tools available to find the bottleneck. And if that doesn't work, having a really good developer look at the code can help pinpoint the issue. A good rule of thumb, though: if you can keep your function call chain fairly flat (under 10 functions deep), you'll probably get decent performance out of your application and not run into too many issues related to algorithmic complexity.

Those are my thoughts on the topic.

Jane Tracy 👩🏽‍💻

This is great. I am definitely learning a lot about the topic.
Thanks for sharing your thoughts, much appreciated. :)