
Westin Humble


Big O Notation and the Student Software Engineer

Greetings from (not so sunny) Brooklyn, NY in the early days of 2022! I’ve recently begun Flatiron’s 15-week software engineering immersive program, and what better way to expand upon concepts the program can often afford only an honorable mention than blogging? For this series of three blogs, I want to focus on material I’ve found particularly interesting and why it benefits the student software engineer to have at least a cursory understanding of it. The first such concept is Big O notation.

When first learning the basics of JavaScript and how to build software and craft solutions for web development, little attention is paid to the efficiency of the algorithms employed. This is understandable, given that it feels like learning an entire written and spoken language in (typically) a relatively short timeframe. At first, the most critical takeaways are the tools you have at your disposal and how/when to use them. Global variables are plentiful, and every function (no matter how often it is used) gets a name. You may even build the most deeply nested loop imaginable just to see if you can make it work for a specific purpose!

At least in the bootcamp setting, this sandbox phase of programming comes to an end quite quickly. Much of this is for readability and best-practice reinforcement. But in the world of web development, where one can’t make accurate assumptions about how up to date most users’ hardware or operating systems are, it becomes important to make your code as efficient as possible (i.e. accomplishing as much as it can while using as few resources as possible). One way to estimate this is Big O notation.

Invented by German mathematicians Paul Bachmann and Edmund Landau well before electronic computers were viable, Big O notation describes the limiting behavior of a function as its argument tends towards a particular value or infinity. As with many mathematical concepts and notations, Big O has been co-opted by other branches of mathematics as well as for more applied uses, as is the case in computer science. It’s important to note that Big O notation in computer science does not and cannot directly measure a particular algorithm’s complexity or its effect on a given computer’s hardware. It’s also worth noting that most everyday algorithms run so quickly and efficiently that their use of resources is negligible.

So where does Big O notation come into play? For the student software engineer, I believe it comes down to understanding how to categorize an algorithm’s runtime efficiency (expanded upon below) and knowing when to start thinking about your program’s runtime efficiency and the effect it might have on your users’ experience. For the latter, the rule of thumb is to start reducing complexity and reaching for optimal tools when you are writing a program that processes a large amount of input data, performs complex operations, and generates a large amount of output data.

When it comes to categorizing algorithmic efficiency, it is not unheard of to be asked to classify algorithms according to Big O notation in technical interviews. Accurate categorization demonstrates that the interviewee has at least an understanding of what to avoid (and when) come time to start building code snippets and making pull requests.

The most common categorizations of time/space complexity using Big O notation in web development are constant, logarithmic, linear, and quadratic. Both time and space complexity are measured relative to the size of the input (i.e. the steps necessary for the algorithm to accomplish its task). Space complexity also tends to be harder to estimate, given variation between environments and programming languages. Of note, time and space complexity often have an inverse relationship, where (within reason) sacrificing one can benefit the other.
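
To make that tradeoff concrete, here is a minimal sketch in JavaScript (the function names are my own, purely for illustration): finding duplicates with nested loops uses constant extra space but quadratic time, while remembering already-seen values in a Set spends linear extra space to get linear time.

// O(n^2) time, O(1) extra space: compare every pair of elements
function hasDuplicatesSlow(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// O(n) time, O(n) extra space: trade memory for speed with a Set
function hasDuplicatesFast(arr) {
  const seen = new Set();
  for (const el of arr) {
    if (seen.has(el)) return true;
    seen.add(el);
  }
  return false;
}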

At the highest level, Big O notation describes how many steps an algorithm takes based on the number of elements it acts upon, and classifies the algorithm according to the worst-case scenario.

A practical, newbie-friendly, non-in-depth guide to the most common categorizations is below (with a code sketch of each after the list):

  • Constant O(1). Where “1” represents the number of steps taken to complete the function, an example would be looking up an element using its known index value.
  • Linear O(n). Where “n” represents the amount of data to be traversed, an example would be iterating through an array, with time complexity increasing by one step per element.
  • Logarithmic O(log n). These algorithms are characterized by the number of operations increasing by one every time the data is doubled. A classic example is searching for a specific name in a phone book: rather than scanning the whole book page by page, you open it near the middle and repeatedly discard the half where you know the name cannot occur. These are especially useful algorithms for large data sets.
  • Quadratic O(n^2). Used to characterize algorithms that are quite slow, where complexity is proportional to the square of the size of the input (e.g. if the input array has 10 elements, the algorithm performs 100 operations). An example would be a function that requires nested iteration, such as looping over an array inside another loop over the same array to find duplicates.
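
To ground those categories in code, here is a minimal JavaScript sketch of each (these functions are my own illustrations, not from any particular library):

// Constant O(1): one step regardless of array size
function getFirst(arr) {
  return arr[0];
}

// Linear O(n): one step per element
function sum(arr) {
  let total = 0;
  for (const num of arr) total += num;
  return total;
}

// Logarithmic O(log n): binary search halves the sorted input on every pass
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // not found
}

// Quadratic O(n^2): nested iteration visits every pair of elements
function allPairs(arr) {
  const pairs = [];
  for (const a of arr) {
    for (const b of arr) {
      pairs.push([a, b]);
    }
  }
  return pairs;
}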

Two-dimensional graph displaying the most common Big O notation categorizations: https://miro.medium.com/max/1400/1*yiyfZodqXNwMouC0-B0Wlg.png

For further elaboration, below are some built-in array methods in JavaScript and their associated Big O classifications (when used on an array). Consider what each method does, the steps required, and the output (if any):

.indexOf() = O(n)
.push() = O(1)
.unshift() = O(n)
.pop() = O(1)
.shift() = O(n)
.slice() = O(n)
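
The gap between .push()/.pop() and .shift()/.unshift() is worth internalizing: operations at the end of an array touch a single slot, while operations at the front force every remaining element to move. A rough sketch of the idea (my own illustration of the concept, not how JavaScript engines literally implement it):

// Conceptually, shift() must move every remaining element down one slot -> O(n)
function naiveShift(arr) {
  if (arr.length === 0) return undefined;
  const first = arr[0];
  for (let i = 1; i < arr.length; i++) {
    arr[i - 1] = arr[i]; // one step per element
  }
  arr.length -= 1;
  return first;
}
// push(), by contrast, just writes to the next free slot at the end -> O(1)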

Need a too-long/didn’t-read version? For the beginning software engineer: always keep algorithmic efficiency in the back of your mind (along with which tools work best for which scenarios), and make sure to understand the most common categorizations come technical-interview time! This was a very condensed overview of a big world when it comes to time/space complexity in software engineering. There’s a lot to know, and a lot yet to be fleshed out. Feel free to drop a comment with questions, critiques, feedback, or just to say hi! Thanks for reading!

Final Note === Here’s a nifty web-based tool for running your algorithms and timing them directly (it measures actual runtime rather than complexity, but it’s a useful reality check). Just pick a language, paste your code, and give it a run:

https://tio.run/#
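
If you’d rather do a rough check locally, console.time()/console.timeEnd() are built into JavaScript and print elapsed wall-clock time. Comparing the same operation across input sizes (the array sizes below are arbitrary) gives a feel for the growth curve:

const small = Array.from({ length: 1000 }, (_, i) => i);
const large = Array.from({ length: 100000 }, (_, i) => i);

// .indexOf() is O(n), so the larger array should take noticeably longer
console.time('indexOf on 1k elements');
small.indexOf(999);
console.timeEnd('indexOf on 1k elements');

console.time('indexOf on 100k elements');
large.indexOf(99999);
console.timeEnd('indexOf on 100k elements');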

