Dorothy

DATA STRUCTURES AND ALGORITHMS 102: DEEP DIVE INTO DATA STRUCTURES AND ALGORITHMS

Introduction

In the previous article, we looked at an introduction to data structures and algorithms and had a sneak peek into algorithm analysis. In this article, we'll delve deeper into algorithm analysis by looking at one of the most common ways of analyzing the efficiency of an algorithm: the Big O notation. We will also look at how Big O applies to arrays and linked lists. All the code in this article is written in the Python programming language.

What is the Big O Notation?

According to Wikipedia, the Big O notation is "a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity."

Simply put, the big O notation measures how drastically the space and time requirements grow as the size of input grows.
The big O notation is used to :

  • analyze the efficiency and performance of an algorithm as its input grows to infinity.
  • measure the scalability of an algorithm, that is, how well the algorithm scales as the input grows very large.

To understand this, it is important to note that some operations are more or less costly on some data structures than on others. For example, it is fast to access an element in an array using an index. However, arrays have fixed lengths, and to keep adding or removing items you need to resize them. In such cases it can be wiser to use a linked list instead, since linked lists grow and shrink without resizing.

Types and Features of time complexities

When analyzing algorithms, we look at space complexity and time complexity. Time complexity is a representation of the amount of time required by the algorithm to execute to completion. Space complexity looks at the additional space allocated relative to the size of the input. There is often a trade-off between saving time and saving space: when more space is available, we can use it to make an algorithm faster and more scalable.
Note: Time Complexity considers how many times each statement executes and not the actual time the code is executed.

Constant Time O(1)

This means the algorithm takes constant time regardless of the size of its input. A constant step is one that does not scale with the input to the function. When calculating the time complexity of an algorithm, constant factors are ignored, since Big O describes how the cost grows as the input grows and focuses on the worst-case scenario.
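For instance, a function that performs two constant-time steps would strictly be O(2), but the constant is dropped and it is still written as O(1). A hypothetical sketch:

def first_and_last(data):
    # Two constant-time steps: O(2) simplifies to O(1)
    print(data[0])
    print(data[-1])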


For example, retrieving the first element of an array.


def firstElement(data):
    return data[0]


Linear Time O(n)

The cost of the algorithm grows in direct proportion to the input. Simple loops typically run in linear time.


For example, printing the values in an array

def getElements(data):
    for value in data:
        print(value)


Quadratic Time O(n^2)

The algorithm's performance is directly proportional to the square of the size of the input. Nested loops run in quadratic time. It is slower than O(n), increasingly so as the input grows.


Example:

for x in data:
    for y in data:
        print(x, y)

Logarithmic time O(log n)

The size of the input is divided (typically halved) at each step, so the number of steps grows far more slowly than the input itself. For example, repeatedly halving an index:

index = len(data)
while index > 0:
    print(data[index - 1])
    index //= 2

An algorithm with logarithmic time is more scalable and more efficient than one with linear or quadratic time, e.g. binary search, which halves the search range at every step.
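A minimal binary search sketch (the data must already be sorted; the function name is illustrative):

def binary_search(data, target):
    # The search range is halved on every iteration: O(log n)
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        if data[mid] == target:
            return mid
        if data[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1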

Exponential complexity O(2^n)

The runtime of the algorithm doubles with every addition to the input. It is not scalable, as it becomes very slow very quickly.


For example, recursive calculation of Fibonacci numbers:

def fibonacci(n):
    # Each call branches into two further calls, so the work roughly doubles with n
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

The Big O notation of various data structures

Arrays

Arrays are used to store a list of elements sequentially in memory.
We use arrays in situations where we want to store a list of elements and want to access them using their index.
When an array is initialized, space in memory is allocated for it and the starting variable points to that address in memory. Each element is then assigned a fixed amount of memory.


Time complexity of operations in arrays

Accessing elements

Since elements are indexed, they can be accessed directly. Therefore, the run-time complexity of accessing an element is O(1). This means accessing an element takes the same time whether the array has 1 element or 1,000 elements.

Inserting and deleting elements

For arrays, the time complexity of inserting and removing elements is dependent on the position in the array where the operation is being done.

  • End of the array: the time complexity is O(1), since only the last element is affected. In Python, the built-in list methods append and pop operate on the end (push and pop in JavaScript).
  • Beginning of the array: all the elements are affected. For example, if the element at index 0 is removed, the element at index 1 shifts to index 0, the one at index 2 shifts to index 1, and so on. The number of operations therefore grows with the number of elements in the array: O(n). In Python, insert(0, ...) and pop(0) operate on the beginning (unshift and shift in JavaScript). Both cases are sketched below.
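A quick sketch using Python's built-in list methods:

data = [1, 2, 3, 4]

data.append(5)     # end of the array: O(1), no other elements move
data.pop()         # end of the array: O(1)

data.insert(0, 0)  # beginning of the array: O(n), every element shifts right
data.pop(0)        # beginning of the array: O(n), every element shifts left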
Searching for elements

The worst-case scenario for a search is checking every element in the array. Therefore, the time complexity will be O(n), as in the linear search sketch below.
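A minimal sketch (the function name linear_search is illustrative):

def linear_search(data, target):
    # Worst case: every element is checked, so O(n)
    for index, value in enumerate(data):
        if value == target:
            return index
    return -1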

Linked Lists

These are similar to arrays in that they store a list of elements sequentially, but they grow and shrink automatically.
A linked list is a group of nodes in sequence, where each node holds two pieces of data: a value and the address of the next node in the list. Each node therefore references the next node. The first node is known as the Head and the last node is known as the Tail. A minimal sketch of this structure follows.
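Here is a small illustration in Python (the Node class and variable names are illustrative, not built-in types):

class Node:
    # Each node holds a value and a reference to the next node
    def __init__(self, value):
        self.value = value
        self.next = None

# Build a three-node list by hand: head -> 10 -> 20 -> 30 (tail)
head = Node(10)
head.next = Node(20)
head.next.next = Node(30)

# Traverse from the head by following each node's reference
current = head
while current is not None:
    print(current.value)
    current = current.next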

Time complexity of operations in Linked lists

Accessing elements

For a linked list, to access an element you must traverse the list from the head. Hence the time complexity is directly proportional to the number of elements in the list: O(n).

Inserting or removing elements
  • Beginning of the list: inserting or removing an element involves creating or removing a node whose address points to the old head. Therefore, the time complexity is constant, O(1).
  • End of the list: inserting an element at the end involves traversing the whole list, creating a new node, and adjusting the previous node's reference to point to it, which is O(n). (If the list also keeps a reference to the tail, this can be done in O(1).)
  • Middle of the list: traverse the list up to the specific point and then adjust the pointers, which is O(n). The first two cases are sketched below.
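A minimal sketch of the first two cases (the Node class and the prepend/append helpers are illustrative):

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def prepend(head, value):
    # Beginning of the list, O(1): the new node simply points at the old head
    return Node(value, next=head)

def append(head, value):
    # End of the list, O(n): walk to the tail before linking the new node
    new_node = Node(value)
    if head is None:
        return new_node
    current = head
    while current.next is not None:
        current = current.next
    current.next = new_node
    return head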

Conclusion

In this article, we've attempted to explain the Big O notation, with reference to arrays and linked lists, as simply as possible. When working with data structures, it is important to consider which operations you will perform on them so that your algorithms remain efficient.
