Dynamic programming is an intimidating topic when it comes to interview preparation; I have always dreaded it. But this time, I found an intuitive way of looking at it, thanks to Python.
First, let's understand the motivation for dynamic programming. Say you have a problem to solve, and you realise that if you solve a subproblem (a smaller instance of the same problem), you can use the solution to that subproblem to solve the main problem. That's the crux of recursive algorithms. For example, you want to calculate the factorial of `n`. Well, if only you had the factorial of `n-1`, you could just multiply `n` with that value, and you'd have your solution. 💯 Alright now, let's focus on how to get the factorial of `n-1`. Well, if only you had the factorial of `n-2`, you could simply multiply `n-1` with that value, and you'd have the solution. 💯💯 Now, how do we get the factorial of `n-2`? If only you had... OK, I'm going to stop. You get the point. Here's the recursive solution for it:
```python
def fact(n):
    if n <= 1:
        return 1
    return fact(n-1) * n
```
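As a quick sanity check (re-defining `fact` here so the snippet runs on its own): the recursion bottoms out at 1 and multiplies on the way back up.

```python
def fact(n):
    if n <= 1:
        return 1
    return fact(n-1) * n

# fact(5) unrolls to 5 * 4 * 3 * 2 * 1
print(fact(5))  # 120
```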
So far, so good. Now, what's dynamic programming? It is merely an optimisation over recursive solutions that becomes relevant when the recursive function is called multiple times for the same inputs. For example, let's take a look at the Fibonacci problem. The Fibonacci formula is `fib(n) = fib(n-1) + fib(n-2)`. Now, `fib(5) = fib(4) + fib(3)` and `fib(6) = fib(5) + fib(4)`. Both calculations require the value of `fib(4)`, and in the naive recursive implementation, it will be calculated twice. Hmm, that's not very efficient. Let's see if we can do better. Let's simply cache the solutions so that we don't recompute them.
```python
solution_cache = {}

def fib(n):
    # retrieve from cache if present
    if n in solution_cache:
        return solution_cache[n]
    if n <= 1:
        return n
    value = fib(n-1) + fib(n-2)
    # save to cache before returning
    solution_cache[n] = value
    return value
```
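To see the saving concretely, here's a small experiment (the call counters are mine, not part of the original code) that counts how many times each version of the function is invoked:

```python
naive_calls = 0

def naive_fib(n):
    global naive_calls
    naive_calls += 1
    if n <= 1:
        return n
    return naive_fib(n-1) + naive_fib(n-2)

solution_cache = {}
cached_calls = 0

def fib(n):
    global cached_calls
    cached_calls += 1
    if n in solution_cache:
        return solution_cache[n]
    if n <= 1:
        return n
    value = fib(n-1) + fib(n-2)
    solution_cache[n] = value
    return value

naive_fib(20)
fib(20)
print(naive_calls)   # tens of thousands of calls
print(cached_calls)  # a few dozen calls
```

The naive version re-solves the same subproblems over and over, so its call count explodes exponentially; the cached version invokes the function only a handful of times per distinct input.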
That's basically the essence of dynamic programming: store the results of the subproblems so that we don't have to recompute them later. Well, now we're more efficient, but we've sacrificed code readability. There's so much going on in the `fib` function; our earlier code was so much simpler. Let's take it up a notch by making the code cleaner with a decorator in Python.
```python
def cache_solution(func):
    solution_cache = {}

    def cached_func(n):
        # retrieve from cache if present
        if n in solution_cache:
            return solution_cache[n]
        func_result = func(n)
        # save to cache before returning
        solution_cache[n] = func_result
        return func_result

    return cached_func

@cache_solution
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
```
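Putting it together (a self-contained copy of the decorator and `fib`), the memoized version handles inputs that would take the naive one ages:

```python
def cache_solution(func):
    solution_cache = {}

    def cached_func(n):
        if n in solution_cache:
            return solution_cache[n]
        func_result = func(n)
        solution_cache[n] = func_result
        return func_result

    return cached_func

@cache_solution
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

print(fib(50))  # 12586269025, returned instantly
```

Note that the recursive calls inside `fib` also go through the cache: the decorator rebinds the name `fib` to `cached_func`, so `fib(n-1)` and `fib(n-2)` hit the memoized wrapper, not the bare function.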
What we've done here is separate the concerns of solving the main problem (calculating the Fibonacci number) from caching the solutions of the subproblems. This makes it easier to come up with dynamic programming solutions. First, we can concentrate on coming up with the recursive solution. Then, if we need to optimise it, we simply write a decorator!
As a result, the code is cleaner and more extensible. How?

- The `fib` function follows the single responsibility principle: it deals only with calculating the Fibonacci number. If we need to alter that logic, it's much easier to do.
- We've created a reusable solution for caching the solutions of subproblems. Maybe you have another function `foo` whose solutions you need to cache. Just decorate it and you're done!
- Now, let's say you want to use an external cache (maybe Redis). You may want to do this because you want a central cache for all your applications, or you want to persist the cache to disk, or you want to employ a cache eviction policy. All you have to do is change the `cache_solution` decorator to save the solution to Redis (as opposed to a dictionary in local memory), and you're done. You can now use Redis to cache the solutions for all the functions that use this decorator!
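As a sketch of that last idea (the backend interface below is hypothetical, not from the article): if the decorator talks to its cache through a tiny get/set interface, swapping the in-memory dictionary for a Redis client becomes a one-line change at the point where the decorator is applied.

```python
class DictCache:
    """In-memory backend; a Redis-backed class would expose the same get/set."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

def cache_solution(backend):
    def decorator(func):
        def cached_func(n):
            cached = backend.get(n)
            if cached is not None:
                return cached
            result = func(n)
            backend.set(n, result)
            return result
        return cached_func
    return decorator

@cache_solution(DictCache())  # swap in e.g. a hypothetical RedisCache() here
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

print(fib(30))  # 832040
```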
Top comments (2)
Nice article. Does the LRU cache work in the same way as the decorator you created?
Thanks :)
Well, if you're asking about Redis, the cache eviction policy is something that comes out of the box with Redis. It's something you can configure on your Redis server, so the decorator would merely have to deal with saving the key-value pair to Redis.
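(For readers wondering about the LRU cache mentioned above: Python's standard library already ships this pattern as `functools.lru_cache`, which memoizes in essentially the same way as the hand-rolled decorator, with `maxsize` providing the eviction policy.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize=None disables eviction (pure memoization)
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

print(fib(50))  # 12586269025
```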