Hugo
What is Dynamic Programming?

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming.
The idea is to simply store the results of subproblems, so that we do not have to re-compute them when needed later.

This simple optimization can dramatically reduce time complexity, in many cases from exponential to linear.

For example, a naive recursive solution for Fibonacci takes exponential time; if we optimize it by storing the solutions of subproblems, the time complexity drops to linear.
 

If you're not familiar with Big O notation, I'll be writing a post on it eventually. But simply put, Big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows.

 

In other words:

Dynamic Programming is simply an optimization technique that uses a cache to store previously calculated values and return them whenever they're needed, without having to re-calculate them.
We save these values in the cache (usually called a memo), typically using a hash map data structure for fast access.
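To make the idea of a cache concrete, here is a minimal sketch of a generic memoizing wrapper (the `memoize` name and the `square` example are mine, not from the article): it wraps a one-argument function and stores each result in a `Map`, the "memo" described above.

```javascript
// Sketch of a generic memoize helper: caches results of a
// one-argument function in a Map so repeated inputs are never re-computed.
function memoize(fn) {
  const memo = new Map();
  return function (arg) {
    if (memo.has(arg)) return memo.get(arg); // cache hit: skip the work
    const result = fn(arg);
    memo.set(arg, result);                   // store for future calls
    return result;
  };
}

// Calling the wrapped function twice with the same input
// only does the real computation once.
let workDone = 0;
const square = memoize(function (x) {
  workDone++;
  return x * x;
});
square(4); // computes: workDone is now 1
square(4); // cache hit: workDone is still 1
```

The same pattern works for any pure function whose results are worth re-using.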


Types of Dynamic Programming

It's important to note that there are two approaches to dynamic programming: Memoization (top-down) and Tabulation (bottom-up).
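The article's code below focuses on memoization, so here is a minimal sketch of the tabulation (bottom-up) approach for Fibonacci, under my own naming: instead of recursing from n downward, we fill a table starting from the base cases.

```javascript
// Bottom-up (tabulation) Fibonacci: fill a table from the base cases
// upward, so every subproblem is solved exactly once.
function tabulatedFib(n) {
  if (n < 2) return n;
  const table = [0, 1];            // base cases: fib(0) and fib(1)
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}
// Time Complexity - O(n)
```

Tabulation avoids recursion entirely, which also sidesteps stack-depth limits for large n.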

One of the most used examples when introducing Dynamic Programming and recursion is the Fibonacci sequence.

Take a look at the image below:

[Image: recursion tree for the Fibonacci sequence]

In this image we're calculating the Fibonacci of 5.
The highlighted values represent repeated calculations. We can store each of them in the cache the first time we calculate it, which ensures we never re-calculate it whenever that value is needed again.
 

This is a simple example. However, to give you a broader view of how big the improvement from Dynamic Programming can be, imagine we needed to calculate the Fibonacci of 30. Here's how many calculations we would do:

Without DP: 2692537 calculations.
With DP: 59 calculations.
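You can reproduce these counts yourself with a small sketch (the counter variables and function names here are mine): each version increments a counter on every function call.

```javascript
// Count how many times each Fibonacci implementation is invoked for n = 30.

let naiveCalls = 0;
function naiveFib(n) {
  naiveCalls++;                      // every invocation counts
  if (n < 2) return n;
  return naiveFib(n - 1) + naiveFib(n - 2);
}

let memoCalls = 0;
function memoFib(n, cache = {}) {
  memoCalls++;                       // every invocation counts, even cache hits
  if (cache[n] !== undefined) return cache[n];
  if (n < 2) return n;
  return cache[n] = memoFib(n - 1, cache) + memoFib(n - 2, cache);
}

naiveFib(30); // naiveCalls is now 2692537
memoFib(30);  // memoCalls is now 59
```

The memoized version makes 1 top-level call plus 2 calls for each of the 29 values from 2 to 30 that it computes once, hence 1 + 2 × 29 = 59.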


A pretty mind-blowing difference, don't you think?

Let's see how we can implement DP by improving the recursive Fibonacci solution.

Recursive Fibonacci

function fib(n) {
    if (n < 2) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
// Time Complexity - O(2^n)

Fibonacci with Memoization

function dynamicFib(n, cache = {}) {
    // cache hit: return the stored result instead of re-computing it
    if (cache[n] !== undefined) return cache[n];

    // base case
    if (n < 2) return n;

    // store the result in the cache before returning it
    return cache[n] = dynamicFib(n - 1, cache) + dynamicFib(n - 2, cache);
}
// Time Complexity - O(n)

To practice your newly gained knowledge, I encourage you to try solving the following LeetCode problem: Climbing Stairs

Top comments (2)

Tomasra2000

That is the most explicit explanation I've seen.

Hugo

Thank you!