Umukoro Emmanuel

# BIG O NOTATION IN DATA STRUCTURES AND ALGORITHMS

In computer science, Big O notation is a metric for analyzing algorithm performance. It measures how the run time of a function or code block grows as the input size increases, expressing the complexity of your algorithm in algebraic terms.
Big O belongs to the family of asymptotic notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.

There exist several asymptotic notations that describe the overall complexity of a computer algorithm:

Big O : As stated earlier, this gives an upper bound on a function, describing a complexity that is "less than or equal to" another function. It captures the worst case.

Big - Ω (Big Omega) : This gives a lower bound, describing a complexity that is "greater than or equal to" another function. It captures the best case.

Big - Θ (Big Theta) : This asymptotic notation gives a tight bound, sandwiching the complexity between an upper and a lower bound, i.e. between the worst and best case.
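As an illustrative sketch (the function f below is my own example, not from the definitions above): for f(n) = 3n² + 2n, the quadratic term dominates, so f is O(n²), Ω(n²), and therefore Θ(n²).

```python
def f(n):
    return 3 * n**2 + 2 * n

# For all n >= 1, f(n) is sandwiched between constant multiples of n^2:
# 3*n**2 <= f(n) <= 5*n**2, which is exactly what Θ(n^2) means.
for n in [1, 10, 100]:
    assert 3 * n**2 <= f(n) <= 5 * n**2
```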

Algorithmic complexity is a measure of how much time and memory space an algorithm requires to complete for a given input of size n. It can be viewed in two distinct sections:

1. Time Complexity of Algorithms

This signifies the total time required by an algorithm to run to completion. Time complexity is most commonly estimated by counting the number of elementary steps an algorithm performs to finish execution. Common run-time complexities include:

• O(1) - Constant Time Complexity: The run time does not change for any given input; it depends on a constant number of operations rather than on the input size. For example, accessing any single element in an array takes constant time, since only one operation is needed to locate it.
```python
TimeComplex = [1, 4, 5, 2, 7, 3]
TimeComplex[2]  # accessing any element takes constant time
```
• O(N) - Linear Time Complexity: The run time increases as the size of the input increases; the time complexity is directly proportional to the size of the input data. It applies to algorithms that loop through the elements of an array, or that must sequentially read their entire input. For example, a function that adds up all elements of a list requires time proportional to the length of the list.
```python
TimeComplex = [1, 4, 5, 2, 7, 3]
for i in TimeComplex:
    print(i)
```
• O(N^2) [Oh N squared] - Quadratic Time Complexity: This represents an algorithm whose run time is directly proportional to the square of its input size. A common example is a nested for loop that visits every pair of indices in an array.
```python
TimeComplex = [1, 4, 5, 2, 7, 3]
for i in TimeComplex:
    for j in TimeComplex:
        print(i, j)  # runs len(TimeComplex) ** 2 times
```
• O(LogN) - Logarithmic Time Complexity: This represents an algorithm whose run time is proportional to the logarithm of its input size. Such an algorithm repeatedly halves the portion of the input it works on to find a particular value, so it does not need to access all elements of its input. Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search.
```python
TimeComplex = [1, 4, 5, 2, 7, 3]
# the index doubles on every pass, so the loop runs O(LogN) times
i = 1
while i < len(TimeComplex):
    print(TimeComplex[i])
    i *= 2
```
• O(2^n) - Exponential Time Complexity: These are often called brute-force algorithms, because the growth rate doubles with each addition to the input size (n), for example by iterating through all subsets of the input elements. This is clearly not an optimal way of performing a task, since it blows up the time complexity. Brute-force algorithms are used in cryptography as attack methods to defeat password protection by trying candidate strings until one unlocks the system.
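As a sketch of exponential growth (this example is mine, not from the list above), the classic naive Fibonacci recursion runs in O(2^n) time: each call spawns two further calls, so the number of calls roughly doubles as n grows.

```python
def fib(n):
    # each call branches into two more calls, giving O(2^n) total calls
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Memoizing the intermediate results would collapse this to linear time, which is why exponential algorithms are usually a sign that a better approach exists.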
2. Space Complexity of Algorithms

Space complexity is the amount of working storage needed by an algorithm: the memory used by the algorithm (including its input values) to execute and produce its result. It is a parallel concept to time complexity and describes how storage requirements grow as the input grows. To calculate the space complexity of an algorithm, only the data space is considered, that is, the amount of space used by its variables and constants.

Space complexity, like time complexity, plays a crucial role in determining the efficiency of an algorithm or program. If an algorithm takes a lot of time, you can still wait for it to run and produce the desired output. But if a program demands more memory than the system can provide, it may fail to run at all.
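The contrast can be sketched with two small functions (the names are my own, chosen for illustration): one that uses a fixed amount of extra memory regardless of input size, and one whose extra memory grows with the input.

```python
def total_constant_space(nums):
    # keeps only one running variable no matter how long nums is: O(1) space
    total = 0
    for x in nums:
        total += x
    return total

def doubled_linear_space(nums):
    # builds a new list as large as the input: O(N) space
    return [2 * x for x in nums]

print(total_constant_space([1, 4, 5, 2, 7, 3]))  # 22
print(doubled_linear_space([1, 2, 3]))           # [2, 4, 6]
```

Both run in O(N) time, but only the second needs memory proportional to the input, which is the distinction space complexity captures.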