Saikrishna-52

DYNAMIC PROGRAMMING

Dynamic programming, or DP, is an optimization technique. It is used in several fields, though this article focuses on its applications in algorithms and computer programming. It's also a topic frequently asked about in algorithmic interviews.
In this post, we'll discuss when to use DP, then look at its two main types, and finally work through an example.
When is DP used?
There are two necessary conditions a problem must satisfy for DP to work.
Overlapping sub-problems: the problem can be broken down into sub-problems that are solved over and over again.
Optimal substructure: an optimal solution to the problem can be built from optimal solutions to its sub-problems.
Both conditions show up in the short Fibonacci sketch below.
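As a quick illustration, consider a plain recursive Fibonacci function (a minimal sketch). fib(n) is built from the answers to fib(n - 1) and fib(n - 2), which is the optimal substructure; and fib(n - 1) and fib(n - 2) both recompute the same smaller terms over and over, which is the overlap.

// naive recursion: the same sub-problems are solved repeatedly,
// leading to roughly O(2^n) calls
int fib_naive(int n)
{
    if (n < 2) return n;   // base cases: fib(0) = 0, fib(1) = 1
    return fib_naive(n - 1) + fib_naive(n - 2);
}

Memoizing or tabulating those repeated sub-problems is exactly what DP does.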

The two kinds of DP
The top-down (memoization) approach
In a top-down approach, we start from the highest level of our problem. We first check whether we have already solved the current sub-problem; if we have, we simply return that stored value. If not, we solve the sub-problem using recursive calls.
Since those calls may require smaller sub-problems we haven't seen before, we continue this way until we reach a sub-problem we have already solved or whose answer we know trivially.

The bottom-up (tabulation) approach
In this approach, we start at the very bottom and work our way up to the top. Since we start from the base case and build results with our recurrence relation, we don't need recursion at all, so this approach is iterative.
The main difference between the two approaches is that bottom-up calculates every sub-problem's solution, while top-down computes only the ones that are actually required. For example, to find the shortest path between a source and a destination with the top-down approach, we only compute distances for the intermediate points the recursion actually reaches, choosing the minimum at each stage.
In a bottom-up approach, on the other hand, we end up calculating the shortest distance from every point on the grid to the destination, and finally return the shortest distance from the start to the end.
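Here is a rough sketch of that bottom-up grid computation (my own example, assuming a cost grid where we may only move right or down; the name min_path_cost is just for illustration). The table is filled backwards from the destination, so every cell gets a value whether or not it lies on the best path:

#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

// dp[i][j] = cheapest cost of reaching the bottom-right cell starting from (i, j)
int min_path_cost(const vector<vector<int>> &cost)
{
    int rows = (int)cost.size(), cols = (int)cost[0].size();
    vector<vector<int>> dp(rows, vector<int>(cols, 0));
    // base case: the destination cell itself
    dp[rows - 1][cols - 1] = cost[rows - 1][cols - 1];
    // fill the whole table, working backwards from the destination
    for (int i = rows - 1; i >= 0; i--)
        for (int j = cols - 1; j >= 0; j--)
        {
            if (i == rows - 1 && j == cols - 1) continue;   // already filled
            int best = INT_MAX;
            if (i + 1 < rows) best = min(best, dp[i + 1][j]);   // step down
            if (j + 1 < cols) best = min(best, dp[i][j + 1]);   // step right
            dp[i][j] = cost[i][j] + best;
        }
    return dp[0][0];
}

This is the "calculate all solutions" behaviour described above: the whole table is computed even though only dp[0][0] is returned.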
As a comparison, let's look at possible top-down and bottom-up functions that return the nth Fibonacci number.
TOP-DOWN APPROACH

#include <vector>
using namespace std;

int fibonacci_top_down_rec(int n, vector<int> &dp)
{
    // return the pre-computed value if it exists
    if (dp[n] != -1) return dp[n];
    // otherwise calculate it using our recurrence relation,
    // storing the result so we don't solve this sub-problem again
    dp[n] = fibonacci_top_down_rec(n - 1, dp)
          + fibonacci_top_down_rec(n - 2, dp);
    // return the stored value
    return dp[n];
}

int fibonacci_top_down(int n)
{
    // handle the trivial cases so the accesses below stay in bounds
    if (n < 2) return n;
    // vector of size n + 1 with every element initialized to -1 ("not computed yet")
    vector<int> dp(n + 1, -1);
    // the base case: pre-computed values for indices less than 2
    dp[0] = 0;
    dp[1] = 1;
    return fibonacci_top_down_rec(n, dp);
}

BOTTOM-UP APPROACH



int fibonacci_bottom_up(int n)
{
    // handle the trivial cases so the accesses below stay in bounds
    if (n < 2) return n;
    // define a vector dp of size n + 1 with every element set to 0
    vector<int> dp(n + 1, 0);
    // the base case
    dp[0] = 0;
    dp[1] = 1;
    // loop over each index of dp, starting after the base case
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    // return the nth Fibonacci term
    return dp[n];
}

While both approaches have the same asymptotic time complexity, the recursive calls in a top-down implementation may cause a stack overflow for large inputs, a problem the iterative bottom-up approach avoids.
TIME COMPLEXITY
Approach                Time      Extra space
Plain recursion         O(2^n)    O(n) call stack
Top-down (memoized)     O(n)      O(n) table + O(n) call stack
Bottom-up (tabulated)   O(n)      O(n) table

Comparing the recursive approach with our top-down approach, it's clear that we are trading space complexity for better time complexity. Of course, since both are recursive, they have the additional space required for the recursive call stack.
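As a side note on that trade-off, the bottom-up table can often be shrunk further. For Fibonacci, only the previous two values are ever needed, so the O(n) table can be replaced by two variables (a small sketch of that idea):

// bottom-up Fibonacci using O(1) extra space:
// only the previous two values are kept around
int fibonacci_constant_space(int n)
{
    if (n < 2) return n;
    int prev = 0, curr = 1;
    for (int i = 2; i <= n; i++)
    {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}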

Divide and Conquer Method vs Dynamic Programming
Divide and Conquer Method

1. It involves three steps at each level of recursion:
Divide the problem into a number of subproblems.
Conquer the subproblems by solving them recursively.
Combine the solutions to the subproblems into the solution for the original problem.

2. It does more work on the subproblems, since common subproblems may be solved repeatedly, and hence takes more time.

3. Its subproblems are independent of each other (see the merge sort sketch after this list).

4. It is a top-down approach.

5. For example: Merge Sort, Binary Search.
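To see what independent subproblems look like, here is a minimal merge sort sketch (my own example): each recursive call sorts its own half of the array, so no sub-problem result is ever needed twice and there is nothing worth memoizing.

#include <vector>
#include <algorithm>
using namespace std;

// divide and conquer on the half-open range [lo, hi)
void merge_sort(vector<int> &a, int lo, int hi)
{
    if (hi - lo < 2) return;                // divide until one element remains
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);                 // conquer the left half
    merge_sort(a, mid, hi);                 // conquer the right half
    // combine: merge the two independently sorted halves
    inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
}

A vector v would be sorted with merge_sort(v, 0, (int)v.size()).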

Dynamic Programming

1. It involves a sequence of four steps:
Characterize the structure of an optimal solution.
Recursively define the value of an optimal solution.
Compute the value of an optimal solution in a bottom-up fashion.
Construct an optimal solution from the computed information.

2. It solves each subproblem only once and stores the result in a table.

3. Its subproblems overlap; they are not independent.

4. The bottom-up (tabulation) form is iterative rather than recursive.

5. For example: Matrix Chain Multiplication.

Conclusion
Dynamic programming is often neither intuitive nor straightforward. Then again, most complex things aren't. But it does get easier with practice. There are tonnes of dynamic programming practice problems online, which should help you get better at recognizing when to apply dynamic programming and at applying it well. Hopefully, this post served as a good starting point.
