
Amanda Fawcett for Educative


The insider's guide to algorithm interview questions

Algorithms are a big part of coding interviews, especially at the big 5 tech companies (Google, Microsoft, Facebook, Apple, Amazon). We’ll take a look at some common algorithms you’ll need to know for an upcoming interview, ways to make them more efficient (a common interview question), and, at the end, some practice problems.

Let’s get started.

Overview of the basics

Basic concepts that you’ll want to review are:

  • Conditional statements: if-else statements, switch statements, and conditional expressions
  • Loops: for loops, while loops, do-while loops
  • Functions: Iterative functions and recursive functions (see the short refresher sketch after this list)
  • Arrays
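As a quick refresher on iterative vs. recursive functions, here is a minimal sketch (summing the first n integers is just an illustrative example) of the same logic written both ways:

#include <iostream>
using namespace std;

// Iterative version: a plain for loop accumulates the total.
int sumIterative(int n) {
  int total = 0;
  for (int i = 1; i <= n; i++)
    total += i;
  return total;
}

// Recursive version: the function calls itself on a smaller input
// until it reaches the base case.
int sumRecursive(int n) {
  if (n <= 0)                      // base case
    return 0;
  return n + sumRecursive(n - 1);  // recursive case
}

int main() {
  cout << sumIterative(5) << endl;  // prints 15
  cout << sumRecursive(5) << endl;  // prints 15
  return 0;
}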

Once you have the basics down, you can move on to the intermediate concepts you should know before scheduling your technical interview. Specifically, this post will show you some key algorithmic paradigms (and ways to make them more efficient), which are fundamental to learning how to break down any coding problem in an interview.

Ready to ace your coding interview? Check out this course: Algorithms and Complexity Analysis: an interview refresher.


Asymptotic Analysis — boosting the efficiency of your program

Before we dive into algorithmic paradigms, something should be said about the time complexity and efficiency of computer programs in relation to algorithms — a concept known as asymptotic analysis. In an interview, you may not be asked about asymptotic analysis directly, but you might be asked to calculate the complexity of an algorithm you wrote or to improve upon the complexity of an algorithm given to you.

  • Course: Big O Notation for Coding Interviews
  • Article: Big O Notation: A primer for beginning devs

Complexity is an approximate measure of the efficiency of an algorithm and is associated with every algorithm you write. This is something all programmers must be constantly aware of. There are two kinds of complexity: time and space. Time complexity and space complexity are essentially approximations of how much time and how much space, respectively, an algorithm will take to process a given input. Typically, there are three tiers to solve for:

  • Best case — represented as Big Omega or Ω(n)
  • Average case — represented as Big Theta or Θ(n)
  • Worst case — represented as Big O Notation or O(n)

Big O is preferred when analyzing an algorithm, as the average and best cases do not give much insight into how efficient the algorithm will be for most use cases.
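As a concrete illustration (linear search is just a convenient example), searching an unsorted array for a target value has a best case of Ω(1) when the target is the first element, an average case of Θ(n), and a worst case of O(n) when the target is last or missing:

#include <iostream>
using namespace std;

// Linear search: best case Ω(1), average case Θ(n), worst case O(n).
int linearSearch(const int arr[], int size, int target) {
  for (int i = 0; i < size; i++) {
    if (arr[i] == target)
      return i;  // best case: found at index 0 after a single comparison
  }
  return -1;     // worst case: scanned all n elements without a match
}

int main() {
  int data[] = {7, 3, 9, 1, 5};
  cout << linearSearch(data, 5, 9) << endl;   // prints 2
  cout << linearSearch(data, 5, 42) << endl;  // prints -1
  return 0;
}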


Finding the Big O complexity of an algorithm

If you’re in an interview and are asked to find the Big O complexity of an algorithm, here is a general rule of thumb:

  • Drop the leading constants
  • Ignore the lower order terms

Example: Find the Big O complexity of an algorithm with the time complexity 3n³ + 4n + 2. Dropping the constant factors and the lower-order terms, this simplifies to O(n³).

Asymptotic Analysis — how to calculate time complexity when you aren’t given an equation

There are three steps you’ll want to take when calculating the time complexity of an algorithm:

  • List all the basic operations in the code
  • Count the number of times each gets executed
  • Sum all the counts to get an equation

Here’s a simple example that measures the time complexity of a for loop of size n:

#include <iostream>
using namespace std;
int main(){
  int n = 10;  // O(1): executed once
  int sum = 0; // O(1): executed once
  for (int i=0; i<n; i++) // i=0 runs once; i<n runs n+1 times; i++ runs n times
    sum+=2; // O(1) per iteration: executed n times
  cout << sum; // O(1): executed once
  return 0;
}

First, split the code into individual operations and count how many times each one is executed, which will look like this:

Operation      Count
int n = 10     1
int sum = 0    1
i = 0          1
i < n          n + 1
i++            n
sum += 2       n
cout << sum    1

After counting how many times each operation executes, you just add all of these counts to get the time complexity of this program.

Time Complexity = 1 + 1 + 1 + (n + 1) + n + n + 1 = 3 + (n + 1) + 2n + 1 => 3n + 5. Dropping the constants and lower-order terms, this is O(n).

General tips for asymptotic analysis:

  • Every time a list or array gets iterated over c × length times (where c is a constant), it is most likely running in O(n) time.
  • When you see a problem where the number of elements in the problem space gets halved each time, it will most probably have an O(log n) runtime (see the binary search sketch after this list).
  • Whenever you have one loop nested inside another, the problem is most likely running in quadratic, O(n²), time.
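To make the halving tip concrete, here is a minimal binary search sketch (assuming the input array is sorted); the search range shrinks by half on every iteration, which is what gives the O(log n) runtime:

#include <iostream>
using namespace std;

// Binary search on a sorted array: the range [low, high] is halved
// on every iteration, giving O(log n) time.
int binarySearch(const int arr[], int size, int target) {
  int low = 0, high = size - 1;
  while (low <= high) {
    int mid = low + (high - low) / 2;
    if (arr[mid] == target)
      return mid;
    else if (arr[mid] < target)
      low = mid + 1;   // discard the left half
    else
      high = mid - 1;  // discard the right half
  }
  return -1;           // target not found
}

int main() {
  int sorted[] = {1, 3, 5, 7, 9, 11};
  cout << binarySearch(sorted, 6, 7) << endl;  // prints 3
  cout << binarySearch(sorted, 6, 4) << endl;  // prints -1
  return 0;
}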

Useful Formulae for calculating time complexity of an algorithm:

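Some of the identities that come up again and again when summing operation counts are:

  • 1 + 1 + … + 1 (n times) = n
  • 1 + 2 + 3 + … + n = n(n + 1)/2
  • 1² + 2² + 3² + … + n² = n(n + 1)(2n + 1)/6
  • 1 + 2 + 4 + … + 2^n = 2^(n+1) − 1 (a geometric series with ratio 2)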


What are algorithmic paradigms?

Algorithmic paradigms are “general approaches to the construction of efficient solutions to problems”; in other words, they are methods, strategies, or techniques for solving a problem, and they are essential for every programmer. Spend time learning these, as you will most likely use one or two of these paradigms in an interview.

Algorithmic paradigms are great because they lay down a framework suited for solving a broad range of diverse problems. Examples include:

  • Divide and conquer — a pattern that breaks the problem into smaller subproblems, solves them recursively, and then combines the results (great for sorting, e.g. merge sort; see the sketch after this list).
  • Backtracking — an algorithmic technique that solves problems by building a solution incrementally, one piece at a time, and removing those candidates that fail to satisfy the constraints of the problem.
  • Dynamic programming — an optimization technique for recursive solutions. Using dynamic programming, you can store the results of subproblems so that you do not have to re-compute them when they are needed later.
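To make the divide and conquer idea concrete, here is a minimal sketch (finding the maximum value in an array is just an illustrative choice): split the range in half, solve each half recursively, and combine the two answers:

#include <iostream>
#include <algorithm>
using namespace std;

// Divide and conquer: find the max of each half recursively,
// then combine the two results with a single comparison.
int findMax(const int arr[], int low, int high) {
  if (low == high)                             // base case: one element
    return arr[low];
  int mid = low + (high - low) / 2;            // divide
  int leftMax = findMax(arr, low, mid);        // conquer the left half
  int rightMax = findMax(arr, mid + 1, high);  // conquer the right half
  return max(leftMax, rightMax);               // combine
}

int main() {
  int data[] = {4, 12, 7, 1, 9};
  cout << findMax(data, 0, 4) << endl;  // prints 12
  return 0;
}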

What algorithms should I be aware of for my interview?

  • Brute Force — This method requires us to go through all the possibilities to find a solution to the problem we mean to solve. This is often the algorithm that comes to mind first, and even though it may be the least efficient, you’re guaranteed a solution.
  • Greedy algorithms — an algorithmic paradigm that builds up a solution piece by piece, meaning it chooses the next piece that offers the most obvious and immediate benefit.
  • Dynamic programming (mentioned above)
  • Divide and conquer (mentioned above)
  • Sorting and searching algorithms — merge sort, quicksort, selection sort, bubble sort, and insertion sort for sorting; linear search and binary search for searching
  • Graph algorithms — breadth-first traversal (BFS) and depth-first traversal (DFS) (see the traversal sketch after this list)
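As a small taste of the graph algorithms above, here is a minimal breadth-first traversal sketch over an adjacency list (the graph built in main is just made-up sample data):

#include <iostream>
#include <vector>
#include <queue>
using namespace std;

// Breadth-first traversal: visit the start node, then all of its
// neighbors, then their neighbors, and so on, using a queue.
void bfs(const vector<vector<int>>& adj, int start) {
  vector<bool> visited(adj.size(), false);
  queue<int> q;
  visited[start] = true;
  q.push(start);
  while (!q.empty()) {
    int node = q.front();
    q.pop();
    cout << node << " ";
    for (int neighbor : adj[node]) {
      if (!visited[neighbor]) {
        visited[neighbor] = true;
        q.push(neighbor);
      }
    }
  }
  cout << endl;
}

int main() {
  // Undirected graph with edges 0-1, 0-2, 1-3, 2-3
  vector<vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};
  bfs(adj, 0);  // prints 0 1 2 3
  return 0;
}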

How to approach a technical interview

  • Make sure you have the basics down.
  • Understand how to analyze and improve the efficiency of your program with asymptotic analysis.
  • Be aware of the different algorithms available to you and the impact they have on complexity.

Sample problems to get you ready for an interview

Note: You can find an interactive coding playground and answers to these problems with the provided hyperlinks.

Asymptotic analysis: Compute the Big O complexity of the code snippet given below.

#include <iostream>
using namespace std;

int main(){
   int n = 10;
   int sum = 0;
   float pie = 3.14;

   for (int i=1; i<n; i+=3){
     cout << pie << endl;
     for (int j=1; j<n; j+=2){
       sum += 1;
       cout << sum << endl;
     }
   }
   return 0;
}

Sorting and searching algorithms: Implement a function that takes two sorted arrays of variable length and finds the median of the two arrays.

Graph algorithms: Implement a function that returns the number of nodes at a given level of an undirected graph.

Greedy algorithms: Given an infinite number of quarters (25 cents), dimes (10 cents), nickels (5 cents), and pennies (1 cent), implement a function to calculate the minimum number of coins needed to represent V cents.
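One possible greedy sketch for this problem (not necessarily the linked solution): repeatedly take the largest coin that still fits. For the 25/10/5/1 denominations, this greedy choice happens to yield the minimum coin count.

#include <iostream>
using namespace std;

// Greedy coin counting: always take the largest coin that fits,
// then repeat on the remainder.
int countCoins(int cents) {
  int denominations[] = {25, 10, 5, 1};
  int coins = 0;
  for (int d : denominations) {
    coins += cents / d;  // take as many of this coin as possible
    cents %= d;          // continue with what is left over
  }
  return coins;
}

int main() {
  cout << countCoins(67) << endl;  // prints 6 (2 quarters, 1 dime, 1 nickel, 2 pennies)
  return 0;
}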

Dynamic programming algorithms: A child is running up a staircase with n steps and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a function to count the number of possible ways that the child can run up the stairs.
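One possible bottom-up dynamic programming sketch for this problem (not necessarily the linked solution): compute the count for every step from 0 up to n, where each step’s count is the sum of the counts one, two, and three steps below it.

#include <iostream>
#include <vector>
using namespace std;

// Bottom-up dynamic programming: ways[i] = ways[i-1] + ways[i-2] + ways[i-3].
long long countWays(int n) {
  vector<long long> ways(n + 1, 0);
  ways[0] = 1;  // one way to stand at the bottom: take no hops
  for (int i = 1; i <= n; i++) {
    ways[i] = ways[i - 1];
    if (i >= 2) ways[i] += ways[i - 2];
    if (i >= 3) ways[i] += ways[i - 3];
  }
  return ways[n];
}

int main() {
  cout << countWays(3) << endl;  // prints 4: {1,1,1}, {1,2}, {2,1}, {3}
  return 0;
}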

Divide and conquer algorithms: Given a 2D array made up of k sorted rows of n elements each (i.e. k sorted arrays) and an empty 1D output array of size k∗n, copy all the elements from the k sorted arrays into the k∗n output array using a divide and conquer approach.


Wrap up

If you’re heading into a technical interview, you must be ready to showcase your knowledge of the different algorithms at your disposal and understand the complexities associated with each. Become familiar with the algorithmic paradigms mentioned above (i.e. divide and conquer, brute force, greedy). As the old adage goes, “practice makes perfect,” so be sure to put in the time to practice implementing different algorithms and calculating their complexity; it’s something developers must be aware of whenever they write code.

Here are a few steps you can take to ensure success at your next interview:

  • Familiarize yourself with different algorithms: Don’t just memorize solutions. Take the time to understand the patterns that underlie each algorithm and the approach you should take.

  • Do your homework: The more you practice, the more comfortable you’ll feel. Study, prepare, practice. One great resource for study and practice is Educative's Algorithms and Complexity Analysis: an interview refresher.


Additional resources

Article: The insider’s guide to recursion interview questions

Article: Big O Notation: A primer for beginning devs

Article: The Definitive Coding Interview Roadmap

Q&A: How can I prepare for interviews in any big software company?

Online Course: Grokking the Coding Interview

Article: 5 tried and true techniques to prepare for a coding interview

Top comments (3)

Steve

Fascinating stuff here, Amanda! Although I'm still at least a year away from thinking about changing fields, these are good techniques for me to learn anyway, as I think they will help me immensely as I'm learning to code!

Amanda Fawcett

Glad to hear this is a helpful article for you. Best of luck with your preparation process!

Caleb Chao

Your definition of Dynamic Programming is incorrect. You are describing Memoization.

For your example question for DP, counting the number of ways to ascend a flight of stairs, a recursive solution would be

f(n) = f(n - 1) + f(n - 2) + f(n - 3)

Memoization will avoid calculating f(n - 2) twice.

Dynamic Programming approaches the problem from the other end.

g(0) = 1
g(1) = 1
g(2) = g(1) + g(0) = 2
g(3) = g(2) + g(1) + g(0) = 4
...