Understanding time complexity is complicated. If I'm honest, I had to read articles over and over and watch videos just to fully understand its purpose. Calculating time complexity was also a challenge because I have a tendency to confuse myself quite often. My goal in writing part 1 of this blog is to explain time complexity and its definitions in a not-so-confusing way.
So, what is time complexity?
Time complexity is an analysis used in programming to estimate how the running time of an algorithm grows with its number of inputs (n). It is usually expressed in Big-O notation, a phrase you might have heard many times before. Here is a list of common time complexities with definitions.
"O(1) — Constant Time: it only takes a single step for the algorithm to accomplish the task.
O(log n) — Logarithmic Time: The number of steps it takes to accomplish a task are decreased by some factor with each step.
O(n) — Linear Time: The number of steps required is directly related to the size of the input (1 to 1).
O(n²) — Quadratic Time: The number of steps it takes to accomplish a task is square of n.
O(C^n) — Exponential Time: The number of steps it takes to accomplish a task is a constant to the n power (a pretty large number)." (freeCodeCamp, "Algorithms in plain English: time complexity and Big-O notation," 2016)
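To make the list above a little more concrete, here is a small Python sketch with one typical example per complexity class. These are my own illustrative functions (not from the quoted article), and the names are just for clarity:

```python
def constant_time(items):
    """O(1): one step, no matter how long the list is."""
    return items[0]

def logarithmic_time(sorted_items, target):
    """O(log n): binary search halves the remaining range each step."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

def linear_time(items, target):
    """O(n): in the worst case, every element is checked once."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def quadratic_time(items):
    """O(n^2): a nested loop visits every pair of elements."""
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

def exponential_time(n):
    """O(2^n): naive Fibonacci makes two recursive calls per step."""
    if n < 2:
        return n
    return exponential_time(n - 1) + exponential_time(n - 2)
```

Notice that the difference is in how the number of steps grows as the input grows, not in how fast any single call happens to run.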
The cover image, from https://learnovercoffee.com/algorithms-datastructures/basics-of-big-o-notation/, charts the time it takes for each algorithm to run. It is a good way to visually compare how long each algorithm takes to complete its task, and a good way to differentiate the time complexities from one another.
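If you want a rough, numbers-only version of the kind of comparison that chart makes, you can print the step counts each class predicts for a few input sizes. This is just an illustrative sketch; `growth_table` is a made-up helper name:

```python
import math

def growth_table(sizes):
    """Step counts predicted by each complexity class for the given n values."""
    rows = []
    for n in sizes:
        rows.append({
            "n": n,
            "O(1)": 1,
            "O(log n)": round(math.log2(n)),
            "O(n)": n,
            "O(n^2)": n ** 2,
            "O(2^n)": 2 ** n,
        })
    return rows

# Even at small n, the gap between the classes is already dramatic.
for row in growth_table([1, 8, 16]):
    print(row)
```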
In future blogs, I plan to go in depth on each time complexity, simplifying it and showing examples. I like to keep things simple and straight to the point, and I hope you enjoyed this. If there is anything you suggest I add to this post, please by all means send me a message.