What is Big O Notation?
Big O notation describes how an algorithm's performance scales with input size. Time complexity, the idea at its core, expresses how a function's runtime grows as its input grows. Big O matters in technical interviews and when comparing code, because it gives a common yardstick for both the number of operations an algorithm performs and the memory it uses (space complexity). Understanding it helps you build faster, more efficient applications.
Let's look at some of the notations used to describe algorithm runtimes, including Big O notation.
For example: suppose we want to buy a car and want to know how much fuel it takes to drive 100 miles. The answer depends on conditions. On a highway the car may consume 10 liters, in city traffic it might consume 20 liters, and under mixed conditions about 15 liters to cover the same 100 miles.
Algorithm performance is measured the same way, using best, worst, and average case scenarios: Omega (Ω) describes the best case, Theta (Θ) the average case, and Big O the worst case. Big O is the one most commonly used in the industry.
Big O - O(1)
When an algorithm has "constant time complexity," it takes the same amount of time to run regardless of the input size. We determine this by analyzing how the number of operations changes as the input grows. For instance, a function that multiplies two numbers performs a single operation no matter how large those numbers are, so it has constant time complexity. A deck of cards offers a handy analogy: drawing the top card takes constant time, because it never requires searching through the entire deck. On a graph, O(1) is a flat line along the bottom, which makes it the most efficient Big O time complexity: no matter how many elements there are, the number of operations stays the same.
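Here is a minimal sketch of that multiply example (the name multiply_numbers is just illustrative):

def multiply_numbers(a, b):
    # One multiplication, no matter how large a and b are -- O(1)
    return a * b

print(multiply_numbers(3, 4))        # 12
print(multiply_numbers(3000, 4000))  # 12000000 -- still a single operation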
Big O - O(n)
O(n) time complexity means that a function's running time grows in direct proportion to the size of its input. A simple function that loops from zero to n has linear time complexity: pass it the number n, and it performs the operation n times. That is exactly what O(n) represents. To illustrate, back to the deck of cards. Suppose we want to find a specific card, say the ten of hearts. We would need to check each card in turn until we locate it. It could be the first card, but that's highly unlikely; and if the deck held hundreds of other cards, none of them the ten of hearts, we would end up checking every single one. The search time depends directly on the size of the deck. This is linear time complexity, represented by O(n). On a graph plotting operations against input size, O(1) is a flat line, while O(n) is a straight line that rises with the number of elements.
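A minimal sketch of that linear card search (the deck contents and the name find_card are just for illustration):

def find_card(deck, target):
    # In the worst case we check every card once -- O(n)
    for card in deck:
        if card == target:
            return card
    return None

deck = ["2 of diamonds", "7 of clubs", "king of spades", "10 of hearts"]
print(find_card(deck, "10 of hearts"))  # checks all 4 cards before finding it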
Drop Constants
Dropping constants means ignoring fixed multipliers in the analysis, because they don't change how the runtime grows as the input size increases.
Let me give you an example. Imagine an algorithm that takes n units of time to run. If we double the input size, it will take roughly 2n units of time; triple it, and about 3n units. The constant (2 or 3) doesn't change the overall pattern of the algorithm's growth. The same logic applies to the function below: it contains two separate loops over n, so it performs about 2n operations, but we drop the constant 2 and call it O(n).
def print_items(n):
    # First loop: n operations
    for i in range(n):
        print(i)
    # Second loop: another n operations -- 2n in total, which is O(n)
    for j in range(n):
        print(j)
Big O analysis focuses on the variable n, the most significant factor; the constant 2 in the 2n print operations above is dropped, leaving O(n). By dropping constants, we can compare various algorithms and see how their efficiency changes with input size.
Big O - O(n^2)
One way to compare every number in a list with every other number is to use nested loops: compare the first number with all the others, then move on to the second number and repeat the process until you reach the last number. Each comparison of two numbers counts as one operation.
For a list of n numbers, this takes n^2 operations: the outer loop runs n times, and for each of those iterations the inner loop also runs n times. The number of operations grows rapidly as the list gets bigger. In general, nested loops whose iteration counts both depend on the input size have a time complexity of O(n^2), which is not considered efficient for solving problems.
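A sketch of the idea (count_comparisons is a made-up name that simply counts the operations):

def count_comparisons(numbers):
    # Outer loop runs n times; for each pass, the inner loop also runs
    # n times, so we perform n * n = n^2 comparisons -- O(n^2)
    operations = 0
    for first in numbers:
        for second in numbers:
            operations += 1  # one comparison of (first, second)
    return operations

print(count_comparisons([1, 2, 3, 4]))  # 16 operations for n = 4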
Dropping Non-Dominant Terms
When a function contains both an O(n^2) pair of nested loops and a separate O(n) loop, its total work is O(n^2 + n). It's important to focus on the dominant term: as the number of elements grows, the time taken grows in proportion to the square of that number, and the n^2 term dwarfs the n term. We therefore drop the non-dominant O(n) term and simplify the time complexity to O(n^2). Always drop non-dominant terms when simplifying.
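As a sketch, consider a function with both shapes of loop (the name print_pairs_then_items is hypothetical):

def print_pairs_then_items(n):
    for i in range(n):        # nested loops: n * n operations -- O(n^2)
        for j in range(n):
            print(i, j)
    for k in range(n):        # single loop: n operations -- O(n)
        print(k)
    # Total work: O(n^2 + n); dropping the non-dominant n term gives O(n^2)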
Big O - O(log n)
To search for a number in a sorted array, we can use a divide-and-conquer approach (binary search) instead of checking every element. Halving the search space at each step means a target among n elements is found in only about log2(n) steps, so the time complexity is O(log n), much faster than a linear search.
A real-world example of a logarithmic search is finding a specific card in an ordered deck. Suppose we are looking for the ten of hearts in a deck ordered by suit: diamonds, clubs, hearts, and spades. We would first split the deck in half to narrow the search to the half containing hearts, then split that half again to isolate the hearts suit, and keep halving the pile of hearts until the ten of hearts is found. This divide-and-conquer approach finds the target card in about log n steps, compared with a linear search through every card.
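In code, this divide-and-conquer search is binary search. A minimal sketch over a sorted array of numbers:

def binary_search(sorted_array, target):
    left, right = 0, len(sorted_array) - 1
    while left <= right:
        middle = (left + right) // 2
        if sorted_array[middle] == target:
            return middle              # found the target
        elif sorted_array[middle] < target:
            left = middle + 1          # discard the lower half
        else:
            right = middle - 1         # discard the upper half
    return -1                          # target is not in the array

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4 (its index)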
Space Complexity
Calculating the space complexity of an algorithm means determining how much additional memory it needs beyond its input. Recursive functions often use O(n) space because every call adds a frame to the call stack, while an iterative pair sum function can get by with O(1) space. Understanding space complexity helps optimize memory usage. (And as with time, the shape of the graph tells the story: a logarithmic algorithm's flatter curve is why it beats a linear one when searching for specific items in large datasets.)
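To make the contrast concrete, here's a hedged sketch; sum_recursive and pair_sum are illustrative names, with "pair sum" read here as simply adding two numbers:

def sum_recursive(n):
    # Each call waits on the next, so n frames pile up on the
    # call stack -- O(n) space
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def pair_sum(a, b):
    # A fixed number of values regardless of input size -- O(1) space
    return a + b

print(sum_recursive(5))  # 15, built with O(n) stack space
print(pair_sum(2, 3))    # 5, using O(1) space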
When analyzing time complexity with multiple inputs, it's crucial to recognize that the inputs may differ in size, so each gets its own variable. For sequential loops, time complexities are added; for nested loops, they are multiplied. A loop over an input of size a nested inside a loop over an input of size b, for example, takes O(a*b) time.
The pattern is:
- Do this, then do that → add the time complexities
- Do this FOR EACH time you do that → multiply the time complexities

This distinction shows up in many interview questions about time complexity with multiple inputs; the sketch below shows both patterns.
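A minimal sketch of both patterns (function names are illustrative):

def print_then_print(a_items, b_items):
    # "Do this, then do that": O(a) + O(b) = O(a + b)
    for a in a_items:
        print(a)
    for b in b_items:
        print(b)

def print_for_each(a_items, b_items):
    # "Do this FOR EACH time you do that": O(a) * O(b) = O(a * b)
    for a in a_items:
        for b in b_items:
            print(a, b)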
There are standard rules to follow when calculating Big O for code:
- Assignments and if statements are O(1)
- Simple loops are O(n)
- Nested loops are O(n^2)
- A loop that divides its counter by 2 each iteration is O(log n) (see the sketch after this list)
- Finally, when dealing with multiple statements, we need to add them together.
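Here's a small sketch of the halving rule from the list above (halve_until_one is a made-up name):

def halve_until_one(n):
    # The counter is cut in half every iteration, so the loop runs
    # about log2(n) times -- O(log n)
    steps = 0
    while n > 1:
        n = n // 2
        steps += 1
    return steps

print(halve_until_one(16))  # 4, since 16 -> 8 -> 4 -> 2 -> 1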
We can employ an iterative algorithm to find the largest number in an array. The function begins by assigning the first element to a variable, then iterates through the remaining elements of the array.
Inside the loop:
- Checks whether the current element is bigger than the largest seen so far
- If so, assigns the current element to the largest

Once the loop is complete, the program prints the largest number in the array. This is a straightforward approach to finding the maximum value. The question is how to analyze its time complexity using Big O:
- Assignment is O(1)
- Loop is O(n)
- The if statement inside the loop is O(1)
- Assignment in the loop is O(1)
- Print is O(1)
- Sum: O(1) + O(n) + O(1) (the O(1) work inside the loop runs n times, which is the O(n) term)
- Drop the non-dominant O(1) terms
- Final complexity: O(n)

These rules help us analyze time complexity systematically.
def findBiggestNumber(sampleArray):
    biggestNumber = sampleArray[0]               # assignment -- O(1)
    for index in range(1, len(sampleArray)):     # loop -- O(n)
        if sampleArray[index] > biggestNumber:   # comparison -- O(1)
            biggestNumber = sampleArray[index]   # assignment -- O(1)
    print(biggestNumber)                         # print -- O(1)
You can follow this repository to see how these time complexities appear in code: https://github.com/azaynul10/The-Complete-Data-Structures-and-Algorithms-Course-in-Python/blob/main/timeComplexities.py