What College Taught me About Dynamic Programming
Dynamic Programming in Simple Terms
One thing I love about studying Computer Science in college is that they teach you fundamental concepts that generalize across the entire field.
One such difficult yet fundamental idea is Dynamic Programming (DP).
In this article, we'll explore what college taught me about DP, why it's important, and how it lets us design highly efficient algorithms.
So without further ado, let's jump right in!
What is Dynamic Programming?
At its core, Dynamic Programming (DP) is a problem-solving technique that solves a problem by combining the solutions to smaller subproblems.
A problem is solvable using DP when it has the following two properties:
Optimal Substructure
Optimal Substructure exists when there is a recurrence relation between the optimal solutions to a problem and its subproblems.
For instance, let optimal(n) be a function that computes the optimal solution for a problem of size n. There is Optimal Substructure if and only if there is a recursive relation between optimal(n) and optimal(n - i), where i ≤ n.
A common example is the Fibonacci recurrence relation:
fib(n) = fib(n - 1) + fib(n - 2)
Overlapping Subproblems
When breaking down a problem into subproblems, subproblems may overlap and share common solutions.
Here is the recursion tree from the Fibonacci algorithm for n = 4.
As you can see, F(2) and F(0) each get computed twice, while F(1) is computed three times. This is very inefficient and yields a time complexity of O(2ⁿ).
We can do better.
Optimizing Fibonacci with Dynamic Programming
Step 1: Find the Recurrence Relation
This is usually the most difficult part of any DP problem.
But in this case, we already know the recurrence relation as stated earlier:
fib(n) = fib(n - 1) + fib(n - 2)
If we implement this recurrence without DP, this is how the code might look:
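A minimal sketch of the naive recursive implementation (I'm using Python here for illustration):

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # Recurrence: fib(n) = fib(n - 1) + fib(n - 2)
    # Each call branches into two more calls, so the
    # same subproblems get recomputed over and over.
    return fib(n - 1) + fib(n - 2)
```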
As discussed earlier, the time complexity is O(2ⁿ) due to re-computation of the same subproblems.
To tackle this, we have two useful techniques to explore together.
Memoization
Memoization is the process of storing the solution of each subproblem to avoid recomputing solutions to the same subproblems.
Let's see how this looks in code:
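A sketch of the memoized version in Python, storing each subproblem's solution in an array:

```python
def fib_memo(n, memo=None):
    if memo is None:
        # memo[i] will store the solution to subproblem i
        memo = [None] * (n + 1)
    if memo[n] is not None:
        # Solution already stored: return it without re-computation
        return memo[n]
    if n < 2:
        memo[n] = n
    else:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

Because each subproblem is solved exactly once, the exponential blow-up disappears.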
How does it compare to the recursive solution?
- We use an array memo, which stores the solution to each subproblem as it is calculated.
- We check whether memo[n] exists (i.e. the solution is already stored); if so, we return it without re-computation.
Bottom-up
Alternatively, the bottom-up approach works by:
- First solving the base subproblems,
- Then combining subproblem solutions until the entire problem is solved.
I will explain what I mean using the code below:
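A sketch of the bottom-up approach in Python, filling an array from the base cases upward:

```python
def fib_bottom_up(n):
    if n < 2:
        return n
    # fib[i] holds the solution to subproblem i
    fib = [0] * (n + 1)
    fib[0], fib[1] = 0, 1  # solve the base subproblems first
    # Combine smaller subproblem solutions until we reach n
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]
```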
How does it compare to the memoization solution?
- We begin by solving the base subproblems, fib[0] and fib[1], and store them in an array called fib.
- We iterate up to n, combining the subproblem solutions to solve the entire problem.
Both the Bottom-up and Memoization approaches boast an impressive time complexity of O(n).
This is much faster than O(2ⁿ), and will drastically optimize performance.
Here are some benchmarks showing the running times for Recursive, Memoization, and Bottom-up approaches.
Key Observations:
- Memoization and Bottom-up approaches perform roughly the same.
- The Recursive approach scales terribly as n increases: going from n = 40 to n = 50 increases the running time by 3 minutes!
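If you want to try a comparison like this yourself, a simple harness using Python's standard timeit module might look like the following (the exact numbers will vary by machine, and I keep n small so the naive version finishes quickly):

```python
import timeit

def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, memo=None):
    if memo is None:
        memo = [None] * (n + 1)
    if memo[n] is None:
        memo[n] = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    if n < 2:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

if __name__ == "__main__":
    n = 25
    for name, fn in [("recursive", fib_naive),
                     ("memoization", fib_memo),
                     ("bottom-up", fib_bottom_up)]:
        # Run each implementation 10 times and report the total time
        elapsed = timeit.timeit(lambda: fn(n), number=10)
        print(f"{name}: {elapsed:.4f}s for n={n}")
```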
Conclusion
Dynamic programming is a fundamental concept that every software engineer should master.
Since DP has a reputation for being difficult, I will continue writing articles that walk through various dynamic programming problems, so stay tuned!
If you enjoyed this article, please make sure to Subscribe, Clap, Comment and Connect with me today! 🌐