Many job opportunities for software engineering and development today require JavaScript knowledge; it is the only language that can be used to write the entire stack. Let's go over Big-O in JS.

"JavaScript is like water. It can adapt to fit almost any shape. If you put JS on the front end, it can become the front end. If you put JS on the backend, it can become the backend. Now, JavaScript can execute, or it can crash. Be like JavaScript, my friend." (Adrian Twarog, @adrian_twarog, 30 Jun 2020)

Part Table of Contents

01. Big-O Notation: By the end of this part, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
02. JavaScript Unique Parts: Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying the rules to simplify the notation.
03. JavaScript Numbers: This part focuses on JavaScript number operations, number representation, the Number object, common number algorithms, and random number generation.
04. JavaScript Strings: This part focuses on strings, the JavaScript String object, and the String object's built-in functions. You will learn how to access, compare, decompose, and search strings for commonly used real-life purposes. In addition, it explores string encoding, decoding, encryption, and decryption.
05. JavaScript Arrays: As a JavaScript developer, you will use the array often; it is the most commonly used data structure. Arrays in JavaScript come with a lot of built-in methods. By the end of this part, you will understand arrays and know how to choose the right method.
06. JavaScript Objects: This part focuses on what JavaScript objects are, how they are declared, and how their properties can be changed. In addition, it covers how JavaScript classes are implemented using prototypal inheritance. This part will be short.
07. JavaScript Memory Management: A variable takes up some memory. In C, the programmer allocates and deallocates memory manually. In contrast, modern JavaScript engines have garbage collectors that delete unused variables. However, there are unexpected pitfalls that developers can fall into. This part shows these pitfalls and presents techniques to help the garbage collector minimize memory problems.
08. Recursion: This part introduces the concepts of recursion and recursive algorithms (remember, they are different; we will discuss them later). We will also discuss the definition of recursion and the fundamental rules for recursive algorithms.
09. Sets: This part focuses on the concept of sets, from both the mathematical definition and the implementation. Common set operations, as well as their implementations, are covered in great detail.
10. Searching and Sorting: This part focuses on searching and sorting arrays. By the end of it, you will understand how to use sorting and searching algorithms for arrays. This part is a bit complicated for beginners, so rely on the visual aids as much as possible.
11. Hash Tables: A hash table is a fixed-sized data structure in which the size is defined at the start. This part explains how hash tables work and the method of generating a unique key. By the end of it, you will understand various hashing techniques and know how to implement a hash table.
12. Stacks and Queues: This part covers stacks and queues (pronounced "kyooz", not "kwewe"); both are data structures used in the implementation of complex data structures. You'll learn what stacks and queues are, how they're used, when they're used, and how to implement them. Let's go!
13. Linked Lists: A linked list is a data structure in which each node (or element) points to another node. Unlike arrays, which have a fixed size, a linked list is a dynamic data structure. By the end of this part, you will understand how to implement and work with linked lists. There are two types of linked lists: singly and doubly linked. We examine the singly linked list first. Let's go!
14. Caching: Caching is the process of storing data in temporary memory so that it can be easily retrieved for later use if it is required again. For example, a database caches data to avoid re-reading the hard drive, and a web browser caches web images to avoid re-downloading them. This part discusses two caching techniques: LFU and LRU caching.
15. Trees: A general tree data structure is composed of nodes with child nodes, and the top node is called the root node. This part explores different types of trees, such as binary trees, binary search trees, and self-balancing binary search trees. It first covers what trees are and how they are structured, then covers methods of traversing the tree data structure in detail. Finally, you will learn about binary search trees and self-balancing binary search trees to understand how to store searchable data.
16. Heaps: A heap is an important data structure that returns the highest or lowest element in O(1) time. This part focuses on explaining how heaps are implemented and how to work with them. One example is heap sort, a sorting algorithm based on heaps.
17. Graphs: In this part, you will learn graph basics, including fundamental terminology and graph types. It also covers working with these different graph types and methods of representing graphs in data structures. Finally, algorithms for traversing, searching, and sorting graphs are explored to solve problems such as finding the shortest path between two graph nodes.
18. Advanced Strings: This part covers more complex string algorithms than the previous section. Now that you have learned about other data structures, they should be easier to comprehend. Specifically, it focuses on string-searching algorithms.
19. Dynamic Programming: Dynamic programming involves breaking down problems into subproblems. By solving the subproblems and saving those results into memory to access them whenever a repeated problem needs to be solved, the algorithmic complexity decreases significantly. To explain dynamic programming, we re-examine the Fibonacci sequence discussed in Part 8. Then this part covers the rules of dynamic programming and walks you through some examples to make the concepts more concrete.
20. Bit Manipulation: Bit manipulation is an advanced topic that JavaScript developers typically do not need to know. However, you should learn a bit about it if you want to implement high-performance server-side code. Understanding bit manipulation requires some knowledge of digital logic; any introductory course in discrete math or circuits would be helpful.
Introduction
The motivation for writing this was the lack of available information about data structures and algorithms written in JavaScript. As you can see, most data structure and algorithm books are written for Java and C++:
10 Best Books for Data Structure and Algorithms for Beginners in Java, C/C++, and Python (javinpaul, Javarevisited, Medium, 11 min read)
I found this strange because many of the job opportunities for software engineering and development today require JavaScript knowledge; it is the only language that can be used to write the entire stack, including the front end and the back end.
JavaScript follows the prototypal inheritance pattern, unlike Java and C++, which follow the classical inheritance pattern (sorry, Java and C/C++), so there are some differences in writing data structures in JavaScript. Classical inheritance allows inheritance by creating a blueprint-like class that objects follow. Here is an example of inheritance in Java:

Inheritance in Java:
class Vehicle {
  protected String brand = "Ford";
  public void honk() {
    System.out.println("Tuut, tuut!");
  }
}

class Car extends Vehicle {
  private String modelName = "Mustang";
  public static void main(String[] args) {
    Car myFastCar = new Car();
    myFastCar.honk();
    System.out.println(myFastCar.brand + " " + myFastCar.modelName);
  }
}
Output:
Tuut, tuut!
Ford Mustang
However, the prototypal inheritance pattern means objects are linked to a prototype object: properties not found on the object itself are looked up on its prototype, rather than being copied from a class blueprint. Here is an example of prototypes in JavaScript:

Inheritance in JavaScript:
function Sandwich(bread, ham, butter, cheese, tomato, cucumber) {
  this.bread = bread;
  this.ham = ham;
  this.butter = butter;
  this.cheese = cheese;
  this.tomato = tomato;
  this.cucumber = cucumber;
}

Sandwich.prototype.log = function () {
  return "Let's add some " + this.bread + " and " + this.cheese;
};

var sandwich = new Sandwich("bread", "ham", "butter", "cheese", "tomato", "cucumber");
sandwich.log();
Output:
Let's add some bread and cheese
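To see that this is linking rather than copying, note that log is never copied onto the individual sandwiches; every instance delegates to the same function on Sandwich.prototype. Here is a quick check, using a trimmed-down two-ingredient Sandwich (my own simplification, so the block runs on its own):

```javascript
// A trimmed-down version of the Sandwich constructor above.
function Sandwich(bread, cheese) {
  this.bread = bread;
  this.cheese = cheese;
}

// log is defined once, on the shared prototype object.
Sandwich.prototype.log = function () {
  return "Let's add some " + this.bread + " and " + this.cheese;
};

var a = new Sandwich("rye", "swiss");
var b = new Sandwich("wheat", "cheddar");

// Neither instance owns a copy of log...
console.log(a.hasOwnProperty("log")); // false
// ...both delegate to the exact same function via the prototype chain.
console.log(a.log === b.log); // true
console.log(a.log()); // Let's add some rye and swiss
```

Because both instances share one function through the prototype chain, changing Sandwich.prototype.log later would affect every existing sandwich at once.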
Part 01: Big-O Notation
By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
Big-O Notation Primer
The Big-O notation measures the complexity of an algorithm. In Big-O notation, n represents the number of inputs, and the question it answers is: "What will happen as n approaches infinity?" Big-O notation tells you how efficient the algorithm is.

Figure 10. Common Big-O Complexities (Live)

Figure 11. Common Big-O Complexities (Static)

Figures 10 and 11 illustrate these common time complexities, as a live animation and as a static chart.
Common Examples (๐ค)
See Figure 11: ${O(1)}$ does not change with the input size; hence, ${O(1)}$ is referred to as constant time. ${O(n)}$ is linear time: it grows proportionally with the number of inputs. An example of an ${O(n)}$ algorithm is printing the numbers from $0$ to $n-1$, as shown here:
Linear Time:
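The code for this example appears to have been lost in formatting; here is a minimal sketch (the function name exampleLinear is my own):

```javascript
// Prints 0 through n-1: one console.log per input unit, so the work
// grows linearly with n, i.e., O(n).
function exampleLinear(n) {
  for (var i = 0; i < n; i++) {
    console.log(i);
  }
}

exampleLinear(5); // prints 0, 1, 2, 3, 4 (one per line)
```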
Similarly, ${O(n^2)}$ is quadratic time, and ${O(n^3)}$ is cubic time. Examples of these complexities are shown here:
Quadratic Time:
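The quadratic example is also missing; a sketch under the same assumption (exampleQuadratic is an assumed name):

```javascript
// Two nested loops: the inner loop does n units of work for each of the
// n outer iterations, so the total is n * n = O(n^2).
function exampleQuadratic(n) {
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}

exampleQuadratic(2); // prints the pairs 0 0, 0 1, 1 0, 1 1
```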
Cubic Time:
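Likewise for the cubic case; a sketch (exampleCubic is an assumed name):

```javascript
// Three nested loops: n * n * n = O(n^3) total iterations.
function exampleCubic(n) {
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      for (var k = 0; k < n; k++) {
        console.log(i, j, k);
      }
    }
  }
}
```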
Finally, another example is logarithmic time complexity: printing the elements that are a power of 2 between 2 and n. For example, exampleLogarithmic(100) will print the following:
1, 2, 4, 8, 16, 32, 64
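The body of exampleLogarithmic itself is missing above; here is a reconstruction consistent with the printed output (the doubling loop is my assumption):

```javascript
// Doubling i on each iteration means the loop runs roughly log2(n)
// times, which is why this is O(log n).
function exampleLogarithmic(n) {
  for (var i = 1; i <= n; i = i * 2) {
    console.log(i);
  }
}

exampleLogarithmic(100); // prints 1, 2, 4, 8, 16, 32, 64
```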
Rules of Big-O Notation
Let's represent an algorithm's complexity as ${f(n)}$. Here, n represents the number of inputs, $f\lparen n \rparen_{time}$ represents the time needed, and $f\lparen n \rparen_{space}$ represents the space (additional memory) needed for the algorithm. It can be challenging to calculate ${f(n)}$, but Big-O notation provides some fundamental rules that help developers compute it.
- Coefficient rule: If ${f(n)}$ is ${O(g(n))}$, then ${kf(n)}$ is ${O(g(n))}$, for any constant ${k>0}$.
- Sum rule: If ${f(n)}$ is ${O(h(n))}$ and ${g(n)}$ is ${O(p(n))}$, then ${f(n)+g(n)}$ is ${O(h(n)+p(n))}$.
- Product rule: If ${f(n)}$ is ${O(h(n))}$ and ${g(n)}$ is ${O(p(n))}$, then ${f(n)g(n)}$ is ${O(h(n)p(n))}$.
- Transitive rule: If ${f(n)}$ is ${O(g(n))}$ and ${g(n)}$ is ${O(h(n))}$, then ${f(n)}$ is ${O(h(n))}$.
- Polynomial rule: If ${f(n)}$ is a polynomial of degree k, then ${f(n)}$ is ${O(n^k)}$.
- Log of a power rule: ${log(n^k)}$ is ${O(log(n))}$ for any constant ${k>0}$.
Note: Don't try to understand all of them for now. Just pay special attention to the first three rules and the polynomial rule, because they are the most commonly used. I'll discuss each of those rules in the following sections.
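As a quick worked instance of the log-of-a-power rule: the exponent drops out of the logarithm as a constant factor, which the coefficient rule then discards.

```latex
\log(n^k) = k \cdot \log(n), \quad \text{and for constant } k > 0,\; k \cdot \log(n) \in O(\log(n))
```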
Coefficient Rule: "Get Rid of Constants"
This rule simply requires you to ignore any constants not related to input size. Coefficients (constants) in Big-O are negligible for large input sizes. Therefore, this is the most important rule of Big-O notation.
If ${f(n)}$ is ${O(g(n))}$ , then ${kf(n)}$ is ${O(g(n))}$ , for any constant ${k>0}$ .
This means that both ${kf(n)}$ and ${f(n)}$ have the same Big-O notation of ${O(g(n))}$ when ${k}$ is a constant. Here is an example of a code block for ${f(n)}$:
function a(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  return count;
}
This block of code has ${f(n)=n}$, because it adds to count n times. Therefore, this function is ${O(n)}$. Here's another example code block, for ${kf(n)}$:
function a(n) {
  var count = 0;
  for (var i = 0; i < 5 * n; i++) {
    count += 1;
  }
  return count;
}
This block has ${kf(n)=5n}$. After all, the first two examples both have a Big-O notation of ${O(n)}$ (that is, ${O(g(n))}$ from the coefficient rule above). The following code block demonstrates another function with a linear time complexity, but with an additional operation, count+=1:
function a(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  count += 1;
  return count;
}
This block of code has ${f(n)=n+1}$. The ${+1}$ comes from the last operation (count+=1). This still has a Big-O notation of ${O(n)}$, because that single operation is not dependent on the input n, as we said earlier.
Sum Rule: "Add Big-Os Up"
The sum rule is easy to understand: time complexities can be added. Imagine a master algorithm (the function ${a(n)}$) that involves two other algorithms (the two for loops). The Big-O notation of the master algorithm is simply the sum of the other two Big-O notations.
If ${f(n)}$ is ${O(h(n))}$ and ${g(n)}$ is ${O(p(n))}$ , then ${f(n)+g(n)}$ is ${O(h(n)+p(n))}$
Note: It is important to remember to apply the coefficient rule after applying this rule.
The following code block demonstrates a function with two main loops whose time complexities must be considered independently and then added:
function a(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  for (var i = 0; i < 5 * n; i++) {
    count += 1;
  }
  return count;
}
In this example, ${f(n)=n}$ and ${g(n)=5n}$, so ${f(n)+g(n)=6n}$. That means ${O(h(n)+p(n))}$ is ${O(6n)}$. However, when applying the coefficient rule, the final result is ${O(n)}$, because both loops abide by the coefficient rule: if ${f(n)}$ is ${O(g(n))}$, then ${kf(n)}$ is ${O(g(n))}$.
Product Rule: "Multiply Big-Os"
The product rule simply states how Big-Os can be multiplied.
If ${f(n)}$ is ${O(h(n))}$ and ${g(n)}$ is ${O(p(n))}$ , then ${f(n)g(n)}$ is ${O(h(n)p(n))}$
The following code block demonstrates a function with two nested for loops, to which the product rule applies:
function a(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
    for (var j = 0; j < 5 * n; j++) {
      count += 1;
    }
  }
  return count;
}
In this example, ${f(n)=5n \times n}$ because the inner loop runs ${5n}$ times for each of the n iterations of the outer loop. Therefore, this results in a total of ${5n^2}$ operations. Applying the coefficient rule, the result is ${O(n^2)}$.
Polynomial Rule: "Big-O to the Power of k"
The polynomial rule states that a polynomial time complexity has a Big-O notation of the same polynomial degree.
If ${f(n)}$ is a polynomial of degree ${k}$ , then ${f(n)}$ is ${O(n^k)}$
The following code block has only one for loop, but it has quadratic time complexity because the loop runs n*n iterations:
function a(n) {
  var count = 0;
  for (var i = 0; i < n * n; i++) {
    count += 1;
  }
  return count;
}
In this example, ${f(n)=n^2}$ because the loop runs n*n iterations, which is equivalent to ${n^2}$, in accordance with the polynomial rule: if ${f(n)}$ is a polynomial of degree ${k}$, then ${f(n)}$ is ${O(n^k)}$.
Summary
Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis starts by looking at the code and applying the rules to simplify the Big-O notation. The following are the most often used rules:
- Eliminating constants
- Adding up Big-O notations
- Multiplying Big-O notations
- Determining the polynomial degree of the Big-O notation
Discussion
Great organizing and visuals, but the example of inheritance isn't right.
JavaScript will link the objects together. Sorry, my English is not good, but I will try to explain: it's like pointers in C. bird will be assigned to car, so they are the same in memory; if you change one, the other will be affected by it.
Yeah, you're right. This is one of the examples of inheritance from the Mozilla Developer Network without prototype: (developer.mozilla.org/en-US/docs/W...)
But I will fix that using prototype. The example is actually bad. This is based on the Mozilla Developer Network: (developer.mozilla.org/en-US/docs/W...)
Testing:

function Sandwich(bread, ham, butter, cheese, tomato, cucumber) {
  this.bread = bread;
  this.ham = ham;
  this.butter = butter;
  this.cheese = cheese;
  this.tomato = tomato;
  this.cucumber = cucumber;
}

Sandwich.prototype.log = function () {
  return "Let's add some " + this.bread + " and " + this.cheese;
};

var sandwich = new Sandwich("bread", "ham", "butter", "cheese", "tomato", "cucumber");
sandwich.log();
I'm looking forward to the next post already!
Learn Data Structure & Algorithm in JavaScript - Part 02 (Edison Pebojot, Jul 1, 8 min read)
Did you mean the logarithmic power rule?
Can you please clarify so that I can fix any of my typos? Thank you in advance!
Great post! The JS community really needs more of this. Looking forward to reading the rest of your posts. I would like to add that JS isn't the only full-stack language, though. Definitely one of the most popular, but there are others.
Nicely done!
Thank you so much!
What a terrible example of "inheritance pattern".
The animal bird part?
I will always post 3 times a week!