In your journey to becoming a proficient software or data engineer, understanding algorithm complexity is crucial. It helps you make informed decisions about the efficiency of your code and ensures that your applications perform well even as the data grows.
In this article, we'll explore the concept of algorithm complexity, specifically focusing on time and space complexity and how to calculate them using Big O notation.
Table of Contents
- Introduction
- What is Algorithm Complexity?
- Asymptotic Notation
- How to Calculate Complexity
- Common Runtime Complexities
- Conclusion
Introduction
If you've ever been tasked with optimizing code, you've likely encountered the concept of algorithm complexity. At its core, complexity deals with the efficiency of an algorithm, focusing on how much time and space it requires to run as the input size grows.
What is Algorithm Complexity?
Algorithm complexity refers to the amount of time and space required by an algorithm to complete its execution. It helps us understand how efficient an algorithm is and how it scales with the size of the input data.
Time Complexity
Time complexity measures how much time an algorithm takes to complete as a function of the input size. It answers the question: "As we add more data, how much longer will our algorithm take?"
Space Complexity
Space complexity measures how much additional memory an algorithm needs as a function of the input size. It answers the question: "As we add more data, how much more memory will our algorithm use?"
Sometimes, you might prioritize one over the other. For instance:
- In a mobile app with limited memory, you might prioritize space efficiency.
- In a real-time system where quick responses are crucial, time efficiency might be more important.
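To make the tradeoff concrete, here is a minimal sketch (the function names `hasDuplicateFast` and `hasDuplicateLean` are my own illustrative choices, not from any library): both functions check whether an array contains a duplicate, but the first spends memory to save time while the second spends time to save memory.

```javascript
// O(n) time, O(n) extra space: remembers every value seen so far.
function hasDuplicateFast(arr) {
  const seen = new Set();
  for (const value of arr) {
    if (seen.has(value)) return true;
    seen.add(value); // extra memory grows with the input
  }
  return false;
}

// O(n^2) time, O(1) extra space: compares every pair instead.
function hasDuplicateLean(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}
```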
Asymptotic Notation
When discussing algorithmic complexity, we use asymptotic notation to describe how the runtime or space usage grows as the input size approaches infinity. There are three main notations:
- Big O Notation (O)
  - Represents the upper bound, or worst-case scenario
  - Example: O(n) means the algorithm will never be slower than linear time
- Big Omega Notation (Ω)
  - Represents the lower bound, or best-case scenario
  - Example: Ω(log n) means the algorithm will never be faster than logarithmic time
- Big Theta Notation (Θ)
  - Represents a tight bound: the growth is bounded above and below by the same function
  - Example: Θ(n) means the algorithm is always linear time
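For example, consider linear search (which we'll implement later in this article): if the target happens to be the first element, the loop exits after a single comparison, which is the Ω(1) best case; if the target is the last element or absent, all n elements are inspected, which is the O(n) worst case. When the upper and lower bounds coincide, as for an algorithm that always touches every element exactly once, we can state the tighter Θ bound.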
How to Calculate Complexity
Calculating complexity involves analyzing how the number of operations grows with the input size: we count the operations an algorithm performs and express that count in terms of the input size (usually `n`).
How to calculate time complexity
To calculate the time complexity, we can follow these steps:
- Count the number of operations (primitive operations)
- Express the number of operations in terms of the input size (usually `n`)
- Keep only the highest-order term (the term with the largest growth rate)
- Drop constants
NOTE 🔥
The steps above also apply to space complexity, except that we are dealing with memory: instead of counting operations, we count the variables the algorithm creates and the size of the input data.
For example, consider the following function:
```javascript
function sumArray(arr) {
  let sum = 0; // 1 operation
  for (let i = 0; i < arr.length; i++) {
    // n operations (where n is the array length)
    sum += arr[i]; // n operations (where n is the array length)
  }
  return sum; // 1 operation
}
```
In this example, the function `sumArray` takes an array `arr` as input and returns the sum of its elements. Let's analyze the complexity following the steps we mentioned earlier:
Step 1: Count the number of operations
- Initialization: `let sum = 0;` - 1 operation
- Loop: `for (let i = 0; i < arr.length; i++)` - `n` operations (where `n` is the length of the array, i.e. `n = arr.length`)
- Summation: `sum += arr[i];` - `n` operations
- Return: `return sum;` - 1 operation
So, the total number of operations is 1 + n + n + 1 = 2n + 2.
Step 2: Express the number of operations in terms of the input size (usually `n`)
The total number of operations is `2n + 2`.
Step 3: Keep only the highest-order term (the term with the largest growth rate)
From our expression `2n + 2`, the highest-order term is `2n`, so we keep `2n`.
Step 4: Drop constants
From `2n`, we drop the constant `2`, leaving `n`.
So, the time complexity of the `sumArray` function is `O(n)`.
NOTE: As `n` approaches infinity (that is, as `n` gets larger and larger, say 1,000, 10,000, 100,000, and so on), the constant `2` becomes insignificant, so we can simplify the complexity to `O(n)`.
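To see why keeping only the highest-order term matters, here is one more minimal sketch (the function `sumAndPairs` is a made-up example for illustration): counting its operations gives a two-term expression that still collapses to the dominant term.

```javascript
function sumAndPairs(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i]; // about n operations
  }
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i] + arr[j]); // about n * n = n^2 operations
    }
  }
  return sum;
}
```

The total is roughly `n^2 + n` operations; since `n^2` grows far faster than `n` as the input gets large, the complexity simplifies to `O(n^2)`.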
How to calculate space complexity
When it comes to space complexity, we are interested in the amount of memory used by the algorithm as a function of the input size.
We usually use this formula to calculate space complexity:
Space Complexity Formula
Space Complexity = Auxiliary space + Input size
NOTE 🔥
Auxiliary space is the extra or temporary space (excluding the input) used by the algorithm, while input size is the space taken by the input data.
For any algorithm, memory is required for the following:
- To store program instructions
- To store constant values
- To store variables
- To handle function calls, jumping statements, etc.
Let's look at an example to understand this better.
```javascript
function sumArray(arr) {
  // the input array arr takes n units of space
  let sum = 0; // 1 unit of space
  for (let i = 0; i < arr.length; i++) {
    // the loop variable i takes 1 unit of space
    sum += arr[i]; // no additional space is allocated here
  }
  return sum;
}
```
From the above function, we have the following:
- Auxiliary space: constant (the variables `sum` and `i` take a fixed amount of space regardless of the input)
- Input size: `n` (the array `arr`)
So, the space complexity is `O(n)`.
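For contrast, here is a minimal sketch (again an illustrative function of my own, not from the example above) where the auxiliary space itself grows with the input:

```javascript
function doubleArray(arr) {
  const result = []; // auxiliary space: grows to n units
  for (let i = 0; i < arr.length; i++) {
    result.push(arr[i] * 2);
  }
  return result;
}
```

Here the input takes `n` units and the `result` array takes another `n` units, so Space Complexity = n + n = 2n, which still simplifies to `O(n)`; the difference is that this bound now comes from the auxiliary space as well as the input.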
In the context of algorithm complexity, primitive operations refer to the basic operations that a computer can perform in a single step. These operations are typically simple and fundamental, such as:
- Arithmetic operations: Addition, subtraction, multiplication, and division.
- Variable assignments: Assigning a value to a variable.
- Comparisons: Checking if one value is greater than, less than, or equal to another.
- Logical operations: Operations like AND (&&), OR (||), and NOT (!).
- Array access: Accessing an element in an array by its index.
When analyzing the time complexity of an algorithm, we count these primitive operations to estimate how the runtime of the algorithm grows as the size of the input increases. This helps in understanding the efficiency of the algorithm.
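As a quick illustration, each line below performs one of these primitive operations:

```javascript
let x = 5;                   // variable assignment
let y = x + 3;               // arithmetic operation (plus an assignment)
let isBigger = y > x;        // comparison
let both = isBigger && true; // logical operation
let first = [1, 2, 3][0];    // array access by index
```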
Common Runtime Complexities
Finally, before we wrap up, let's look at some common runtime complexities with their respective code implementations:
Common Runtimes
Here are some common time complexities you'll encounter:
1. Constant time - O(1)
- Runtime doesn't increase with input size
- Example: Accessing an array element by index
Code Implementation
```javascript
function getFirstElement(arr) {
  return arr[0];
}
```
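However large `arr` grows, reading `arr[0]` is a single step, so the runtime curve stays flat as the input grows.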
2. Logarithmic time - O(log n)
- Runtime increases logarithmically with input size
- Example: Binary search
Code Implementation
```javascript
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    let mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
```
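Note that binary search assumes the array is sorted: each comparison discards half of the remaining elements, so an array of n elements is exhausted after roughly log2(n) halvings, which is exactly where the logarithmic runtime comes from.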
3. Linear time - O(n)
- Runtime increases linearly with input size
- Example: Linear search
Code Implementation
```javascript
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}
```
4. Polynomial time - O(n^k)
- Runtime is a polynomial function of input size
- Example: Nested loops (two nested loops give O(n^2), the special case known as quadratic time)
Code Implementation
```javascript
function printPairs(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
```
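Each of the two loops runs n times, so the body executes n * n = n^2 times; with k nested loops the count becomes n^k, which is why this family is called polynomial time.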
5. Exponential time - O(2^n)
- Runtime doubles with each addition to the input
- Example: Recursive Fibonacci
Code Implementation
```javascript
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
```
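Each call (for n > 1) spawns two more recursive calls, so the call tree roughly doubles in size every time n increases by one; that repeated doubling is what produces the O(2^n) growth.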
6. Factorial time - O(n!)
- Runtime grows factorially with input size
- Example: Generating all permutations
Code Implementation
```javascript
function generatePermutations(arr) {
  if (arr.length <= 1) return [arr];
  let result = [];
  for (let i = 0; i < arr.length; i++) {
    let rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    let perms = generatePermutations(rest);
    result.push(...perms.map((perm) => [arr[i], ...perm]));
  }
  return result;
}
```
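The factorial growth comes from the structure of the problem itself: there are n choices for the first element, n - 1 for the second, and so on, giving n! permutations in total, and the function must build every one of them.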
Conclusion
In this article, we've explored the concept of algorithm complexity, focusing on time and space complexity, how to calculate them using Big O notation, and the most common runtime complexities you'll encounter in practice.
In our next article, we'll be focusing on sorting algorithms, their implementations, and their complexities. Until then, stay tuned!
Stay Updated and Connected
To ensure you don't miss any part of this series and to connect with me for more in-depth
discussions on Software Development (Web, Server, Mobile or Scraping / Automation), data
structures and algorithms, and other exciting tech topics, follow me on:
Stay tuned and happy coding 👨‍💻🚀