Dynamic programming works by splitting the main problem into sub problems and finding optimal solutions to those sub problems. When the solution to a sub problem is found, we cache the result and re-use it whenever the same overlapping sub problem occurs again.

When a problem's solution depends on its sub problems' solutions, you can solve it using recursion. But a drawback of recursion is that it uses extra memory: each call's variables are kept on the stack until that function returns its result to its caller. The system limits the amount of stack memory available, so a deep call stack makes the approach less efficient and can even overflow the stack.
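
As a rough illustration, here is a minimal sketch (the class name, method name, and depth value are made up for this example) where a deeply recursive sum exhausts the call stack; the exact depth at which it fails depends on the JVM's stack size setting (-Xss):

public class DeepRecursionDemo {

    // Recursively sums 1..n; every call adds a frame to the call stack.
    static long sum(long n) {
        if (n == 0) {
            return 0;
        }
        return n + sum(n - 1);
    }

    public static void main(String[] args) {
        // For a large n this typically throws StackOverflowError, because
        // all n frames must stay on the stack until the base case returns.
        System.out.println(sum(1_000_000));
    }
}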

We can apply the dynamic programming technique when the sub problems overlap. Other problems, such as binary search, follow a similar pattern of reducing the input scope in each iteration, but there the input to each iteration is unique, so there are no overlapping sub problems and nothing to cache.
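
To illustrate the contrast, here is a minimal binary search sketch (the method name is ours, for illustration): each iteration halves the search range, and no range is ever examined twice, so caching would buy nothing.

static int binarySearch(int[] sorted, int target) {
    int lo = 0;
    int hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi) / 2
        if (sorted[mid] == target) {
            return mid;
        } else if (sorted[mid] < target) {
            lo = mid + 1;  // discard the left half
        } else {
            hi = mid - 1;  // discard the right half
        }
    }
    return -1; // every iteration inspects a new, smaller range; no sub problem repeats
}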

A classic example where we can apply the dynamic programming technique is computing the Fibonacci sequence.

For example, say we want to compute fibonacci(5); the decision tree will look like the diagram below.

Fibonacci recursive decision tree
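
In plain text, the call tree looks roughly like this (a sketch, using the convention fib(0) = fib(1) = 1 that the code below follows):

fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1)
│   │   │   └── fib(0)
│   │   └── fib(1)
│   └── fib(2)
│       ├── fib(1)
│       └── fib(0)
└── fib(3)
    ├── fib(2)
    │   ├── fib(1)
    │   └── fib(0)
    └── fib(1)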

If we compute the Fibonacci sequence without dynamic programming, the time complexity is exponential, O(2^n): each call spawns two sub problems, n-1 and n-2, the decision tree is n levels deep, and so the number of calls roughly doubles at every level.

If you look at the diagram above, some sub problems overlap. For example, fib(5) = fib(4) + fib(3) and fib(4) = fib(3) + fib(2); in these two statements, fib(3) is an overlapping sub problem. Recomputing the same sub problem is wasted work, and this is exactly where the dynamic programming technique helps: we can cache the sub problem results and re-use them.

Overlapping sub problem

Dynamic programming can be applied in two ways: memoization (top-down) and tabulation (bottom-up).

When you find a problem that can be split into sub problems which can be solved independently, first come up with a brute-force approach. Then observe whether any sub problems overlap, and optimize the brute-force solution by caching their results with memoization. After that, try a tabulation approach, and finally look for further memory improvements.

We will solve the Fibonacci problem using four different solutions, refining each one until we reach the most optimal version.

Let's implement a solution to the Fibonacci sequence using a simple recursive approach.

public class Fibonacci {

    static int fibFunctionUsingBruteForceApproach(int n) {
        // base cases: this implementation uses the convention fib(0) = fib(1) = 1
        if (n == 0 || n == 1) {
            return 1;
        }
        // recursive case: sum of the two preceding Fibonacci numbers
        return fibFunctionUsingBruteForceApproach(n - 1) + fibFunctionUsingBruteForceApproach(n - 2);
    }
}
Complexity Analysis:

Time complexity: The above solution runs in O(2^n), where n is the input number.
Space complexity: O(n), since the recursive calls can be at most n frames deep, and that is the amount of memory the stack uses.

In this solution, we apply the dynamic programming (memoization) technique: cache the sub problem results and re-use them.

import java.util.HashMap;

public class Fibonacci {

    static int fibUsingMemoization(int n) {
        HashMap<Integer, Integer> cache = new HashMap<>();
        return fibUsingMemoizationRecursive(n, cache);
    }

    static int fibUsingMemoizationRecursive(int n, HashMap<Integer, Integer> cache) {
        if (n == 0 || n == 1) { // base cases
            return 1;
        }
        // re-use the cached result if this sub problem was already solved
        if (cache.containsKey(n)) {
            return cache.get(n);
        }
        int result = fibUsingMemoizationRecursive(n - 1, cache) + fibUsingMemoizationRecursive(n - 2, cache);
        cache.put(n, result); // cache the result for future overlapping calls
        return result;
    }
}
Complexity Analysis:

Time complexity: After applying the dynamic programming memoization technique, the run time comes down to O(n), where n is the input number, because each sub problem is computed only once.
Space complexity: O(n), since the recursion goes at most n calls deep and the cache holds at most n entries.

In this solution, we apply the dynamic programming (tabulation) technique: build the sub problem results bottom-up in a table and re-use them.

public class Fibonacci {

    static int fibonacciUsingTabulation(int n) {
        if (n <= 1) { // guard: avoids an out-of-bounds write when n is 0
            return 1;
        }
        int[] cache = new int[n + 1];
        cache[0] = 1; // base cases
        cache[1] = 1;
        // build the table bottom-up from the two previous entries
        for (int i = 2; i <= n; i++) {
            cache[i] = cache[i - 1] + cache[i - 2];
        }
        return cache[n];
    }
}
Complexity Analysis:

Time complexity: The tabulation approach runs in O(n), where n is the input number.
Space complexity: O(n), since we store the computed values in a temporary array of size n + 1.

If you look at the tabulation approach above, it uses relatively less memory than the memoization technique, because it does not consume the recursion stack.

For illustration purposes, we have looked at both the memoization and tabulation techniques. But the tabulation approach can be optimized further in terms of memory: when calculating a Fibonacci value, we only need the previous two values, so we can cache just those two.

static int fibonacciOptimized(int n) {
    int prev1 = 1; // fib(i - 2)
    int prev2 = 1; // fib(i - 1)
    for (int i = 2; i <= n; i++) {
        int currentFib = prev1 + prev2;
        // slide the two-value window forward
        prev1 = prev2;
        prev2 = currentFib;
    }
    return prev2;
}
Complexity Analysis:

Time complexity: The above approach runs in O(n), where n is the input number.
Space complexity: O(1), since we store only the previous two values.
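
As a quick sanity check, here is a small driver (a hypothetical main method, not part of the original article, which assumes all four methods live in a single Fibonacci class) that runs every implementation and confirms they agree:

public class FibonacciDemo {
    public static void main(String[] args) {
        for (int n = 0; n <= 10; n++) {
            System.out.printf("n=%d brute=%d memo=%d tab=%d optimized=%d%n",
                    n,
                    Fibonacci.fibFunctionUsingBruteForceApproach(n),
                    Fibonacci.fibUsingMemoization(n),
                    Fibonacci.fibonacciUsingTabulation(n),
                    Fibonacci.fibonacciOptimized(n));
        }
    }
}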

The source code for the examples discussed above can be found at the GitHub link for Java code.