Dynamic programming is a powerful design technique that combines the correctness of brute force with the efficiency of greedy algorithms. There are two common use cases for it: finding an optimal solution (for example, we may want to find a minimum or a maximum answer to some question) or counting the total number of solutions that some problem may have. These are not the only types of problems that DP solves, but they're the most common.

The method was invented by Richard Bellman in the 1950s. Dynamic programming is somewhat of a weird name, so don't look too much into it. A fun fact is that Bellman invented this name because he wanted to hide the fact that he was doing mathematical research. The reason he was hiding this is that the Secretary of Defense in Washington had a pathological fear and hatred of the word "research". Using the term "dynamic programming" made it hard for anyone to express disapproval, as it was quite generic.

Dynamic programming appears in many interview questions and is generally seen as a difficult topic. However, if you understand the fundamentals and solve enough problems, you will get good at solving them, and who knows, maybe you'll even enjoy them. Dynamic programming is about identifying and solving all sub-problems. While the general idea is simple, identifying which sub-problems to solve can sometimes be hard. The good news is that the human brain is generally good at pattern recognition, so the more problems we solve, the better we become at quickly identifying which sub-problems to solve. We will look at four problems in this video, starting with the basics. Future videos will cover more dynamic programming problems, so don't forget to subscribe to get notified.

Let's start with the classic Fibonacci problem. We want to write a function fib(n) that returns the nth Fibonacci number. The first and the second Fibonacci numbers are 1. To generate the next number, we sum the previous two. For example, the sixth Fibonacci number is 8; we calculate it as the sum of the
previous two numbers, which in this case are 3 and 5.

Let's write a naive recursive solution to solve this problem. The base cases for the recursion are n = 1 and n = 2. Otherwise, we return the sum of the two previous Fibonacci numbers. This is a very simple solution, but it has a problem. Let's print the seventh Fibonacci number: we get 13. But what happens if we try to print the 50th Fibonacci number? It's very slow.

Let's see why by visualizing how this function is executed. To calculate fib(5), we need to compute fib(4); to calculate fib(4), we need to compute fib(3); and to compute fib(3), we need to compute fib(2). Finally, we know that fib(2) is a base case, so we return 1. We continue calculating fib(3), which also requires fib(1); that's also 1, and we can finally say that fib(3) is 2. We go back to calculating fib(4): now we need fib(2) again, so this is 1, and the sum is 3, so fib(4) is 3. Now we go back to calculating fib(5), and we need the second part of the equation, which is fib(3). To calculate fib(3), we go through the same process again: we need fib(2) and fib(1). Now we can sum these and get 2, and finally we can calculate fib(5), which gives us the result 5. As you can see, this was a very long process with a lot of recursive calls just to get to the result for fib(5).

Can you tell me what the time complexity of this function is? We can assume that the number fits in a machine word, which means additions are fast. Let's say the running time is T(n), which is T(n - 1) + T(n - 2) + O(1).
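As an aside, the on-screen code isn't included in this transcript, so here is a minimal Python sketch of the naive recursion just described (the function name fib is assumed):

```python
def fib(n):
    # Base cases: the first and second Fibonacci numbers are 1.
    if n <= 2:
        return 1
    # Otherwise, sum the two previous Fibonacci numbers.
    return fib(n - 1) + fib(n - 2)

print(fib(7))  # 13
```

Calling fib(50) with this version is the slow case discussed here: each call branches into two more calls, recomputing the same values over and over.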
The O(1) constant is for the additions and other simple operations that we run in this function. The time complexity is the same as the nth Fibonacci number, which grows like the golden ratio to the nth power, meaning it's very slow. Another way to look at this is to see that T(n) is at least 2·T(n - 2), because the bigger n is, the more work we need to do. This means the running time at least doubles every time we subtract 2 from n, and we keep subtracting 2 until we reach the base case. We can subtract 2 about n/2 times, so the complexity of this function is at least 2^(n/2).

Notice that we are doing the same computations multiple times. For example, we have already computed fib(3), but we are repeating the computation later; we should only do it once. There is a general approach to improving algorithms like this one, called memoization (from the Latin for "to be remembered"). Let's go back to our implementation. The idea is to remember the computation of each Fibonacci number and compute it at most once. We can do this by storing the computed value in a dictionary; let's call it memo. memo is initially empty. Next time we want to return the nth Fibonacci number, we first check if it's in the dictionary, and if so, we don't need to run the whole computation: the computation is replaced by a single dictionary lookup. Notice that the memoized solution is almost identical to the naive solution. This means you can convert the naive approach to the memoized approach for any recursive algorithm in general.

Let's see how this algorithm runs with memoization. If we run fib(50) now, we get the result very quickly. So can you tell me what the running time complexity of the memoized solution is? Well, notice that recursion is rare in this case: the function will make the recursive calls only the first time for each value of n; every next time, it will retrieve the value from the dictionary, which is fast. This means that the number of non-memoized calls is n, and each non-memoized call, ignoring the
recursive part, takes O(1) to compute. This is why the total running time complexity is linear.

And this is the fundamental idea of dynamic programming: remember the solutions to sub-problems and try to reuse them to solve bigger problems. This is why the biggest challenge when designing a dynamic programming solution is to figure out which sub-problems can help you solve the actual problem. So, to summarize: in this implementation, we started from the final problem and recursively computed the sub-problems when needed. Another way to look at dynamic programming problems is to start from the bottom up, which means computing the sub-problems first. Most people prefer this approach because it doesn't have function calls, which makes it more efficient in practice. It also allows us to save memory in some cases: for example, to compute Fibonacci we only need to remember the last two values, so we can delete a value once we know we won't need it anymore. Notice that this solution is the same as the memoized approach, except that we are storing the computed values in the dictionary and iterating over sub-problems with a for loop instead of using recursion.

When we compute the ith Fibonacci number, we are assuming that the previous two are already computed. This means we need to put a bit of thought into the order in which we solve the sub-problems: it's important to solve the dependencies of a sub-problem first. Therefore, we have to solve the sub-problems in topological sort order, if you imagine each sub-problem being a node and the edges representing dependencies between sub-problems. The dependency graph for the Fibonacci sub-problems is very simple: the nth number depends on the (n - 1)th and the (n - 2)th, and so on. It's usually obvious in what order we need to solve the sub-problems, but the sub-problem dependencies must not form a cycle, because then it would be impossible to sort them in topological order.

Let's move on to the next problem. Given a set of coin values c1 to ck and a target amount of
money M, what's the minimum number of coins that can produce the given amount? For example, let's say we want to give a customer 734 cents in change using euro coins. The possible euro coins are 1, 2, 5, 10, 20, 50, 100, and 200 cents. The optimal solution is to select three coins of 200, one coin of 100, one coin of 20, one coin of 10, and two coins of 2 cents. Do you notice anything interesting here? Well, we're always choosing the largest possible coin until we get to the target amount. This is called a greedy approach, because we're choosing the greediest option every time, and it turns out this works for euro coins. However, it's not immediately obvious that this approach works, and it actually doesn't work in the general case. For example, imagine that the coins are worth 1, 4, and 5 cents and the target sum is 13. The greedy approach would produce two coins of 5 cents and three coins of 1 cent, which is five coins in total. However, the optimal solution is to use two coins worth 4 cents each and one coin worth 5 cents.

Can we use dynamic programming to solve this problem? If so, what sub-problems do we need to solve? It's usually a good idea to consider the final question you're trying to answer to also be the sub-problem for dynamic programming. In this problem, the question is: what's the minimum number of coins that can produce the sum M? If this were our sub-problem, what would be a recursive formulation to solve it? Let's say that the function minimum_coins represents the minimum number of coins required for a sum m. Clearly, our base case would be minimum_coins(0) = 0, because no coins are required to form an empty sum. Now let's try to solve a general sub-problem minimum_coins(m), assuming that m is greater than zero. Remember when I said that dynamic programming is a bit like brute force? If we were to implement a brute-force solution, how would we do it? If we want to get to the sum 13, we can do that by choosing any of the coins. If we choose coin 5, then we have used one
coin and are left with solving the same problem for sum 8. Similarly, if we choose coin 4, we have to solve the same problem for sum 9, and if we choose coin 1, we have to solve the problem for sum 12. We will keep doing this until we get to the initial position, which is our base case, zero. However, notice that we can't go below zero, so we will only choose coins that lead to valid states. For each node in the tree, we try all the possible coins and take the best solution. With this in mind, we can define the recursive solution as: minimum_coins(m) is 0 if m is 0; otherwise, for m greater than 0, we take the best solution after trying every possible coin c.

Let's try to implement this using the naive recursive approach first. Let's write a function minimum_coins. The base case requires no coins, while in the other cases we want the minimum across all sub-problems, plus one extra coin to get to m. Negative sub-problems don't exist, so we ignore those. We're using the min_ignore_none helper function to ignore None values, because some sub-problems may not have a solution. Let's run this function for sum 13 and coins 1, 4, and 5. We get the result 3, because we can form 13 as 4 + 4 + 5.
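The transcript names the functions minimum_coins and min_ignore_none but doesn't show the code, so here is a sketch of the naive recursion described above (the exact signatures are assumptions):

```python
def min_ignore_none(a, b):
    # A min() that treats None as "no solution yet".
    if a is None:
        return b
    if b is None:
        return a
    return min(a, b)

def minimum_coins(m, coins):
    # Base case: no coins are required to form an empty sum.
    if m == 0:
        return 0
    answer = None
    for coin in coins:
        subproblem = m - coin
        if subproblem < 0:
            continue  # negative sub-problems don't exist
        answer = min_ignore_none(answer, minimum_coins(subproblem, coins))
    # One extra coin takes us from the sub-problem to m.
    return None if answer is None else answer + 1

print(minimum_coins(13, [1, 4, 5]))  # 3
```

If no combination of coins can reach m (say, coins [4, 5] and sum 3), the function returns None, which is exactly the case min_ignore_none exists to handle.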
What happens if we run it for a sum of, say, 150? Similar to the recursive Fibonacci, this is very slow. Luckily, we know about memoization, so we can use this trick to speed up our solution. Let's store the computation in the memo dictionary and return it if it's already computed. This solution is much faster and returns the value 30 immediately. The reason this function is efficient is that the answer for each sub-problem is calculated recursively only once; after it has been computed, it can be efficiently retrieved the next time the function is called with the same parameter. The time complexity is O(M·K), where K is the number of coins and M is the target sum, because for each sub-problem we iterate over each coin to find its best solution.

Now let's write a bottom-up solution, using a loop to calculate the solution for all the sums up to M. First, we store the solution for the base case, which is zero. Then we iterate over each sub-problem and each coin type, ignore the negative sub-problems, and keep the best solution. Notice that we are calling the get method on the dictionary, which returns None instead of raising an error when the key is missing. Both memoization and the bottom-up approach are good solutions. The bottom-up approach has a lower constant factor and is slightly shorter, which is why I personally prefer it. From now on, I will only use the bottom-up approach in the final solutions. Nevertheless, it is often easier to think about dynamic programming problems in terms of recursive functions, so the thinking process will remain the same.

Now let's see a variant of this problem. Instead of finding the minimum number of coins to form a sum M, let's find in how many ways we can form the given sum M. For example, given coins 1, 4, and 5, in how many ways can we form a sum of 5? There are four ways: using all ones, using one and four (and vice versa), and using a single coin of five. Let's try to construct a recursive solution. Choosing coin 5 takes us to zero.
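Going back to the bottom-up minimum-coins solution for a moment — the on-screen code isn't in the transcript, so this is a sketch of the loop-based version described above, using dict.get as mentioned (the function name is an assumption):

```python
def minimum_coins_bottom_up(m, coins):
    # memo[s] holds the minimum number of coins needed to form sum s.
    memo = {0: 0}  # base case: zero coins form an empty sum
    for s in range(1, m + 1):
        for coin in coins:
            subproblem = s - coin
            if subproblem < 0:
                continue  # negative sub-problems don't exist
            # get() returns None instead of raising an error for a missing key.
            sub = memo.get(subproblem)
            if sub is None:
                continue  # that sub-problem has no solution
            best = memo.get(s)
            if best is None or sub + 1 < best:
                memo[s] = sub + 1
    return memo.get(m)

print(minimum_coins_bottom_up(150, [1, 4, 5]))  # 30
```

Unlike the naive recursion, this version solves each sub-problem exactly once, in order, so the sum of 150 comes back immediately.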
Zero is the starting point, so it's a good candidate for the base case. The question is: what is the answer for the base case? In how many ways can we construct zero? Well, our only option is to choose no coins, so that's the one way in which we can form zero. This path represents the choice of a single coin with value 5. Another option is to choose coin 4, which takes us to the sub-problem 1: in how many ways can we construct 1? To find the answer, we solve the sub-problem for 1, and we repeat this process for every coin. In the final solution, we want to know in how many ways we were able to reach zero, so we can take the sum of solutions across all sub-problems to get the final answer. Let's go and implement the bottom-up solution. Note that this is going to be very similar to the previous problem: the only differences are the base case and taking the sum instead of the minimum. Let's run this solution on the example we saw earlier.

Before finishing this video, let's look at one more counting problem. Imagine an n-by-m grid. A rabbit is in the top-left corner and wants to go to the bottom-right corner. It can only go down or right. In how many ways can it reach the bottom-right corner? For example, for a 2×3 grid, there are three ways in which the rabbit can get there: right-right-down, right-down-right, and down-right-right. Let's start in the exact same way as before, by defining a sub-problem. Our goal is to find in how many ways the rabbit can get to the bottom-right corner of a grid of size n by m, so let's use that as the sub-problem. It's helpful to visualize the problem, so let's look at a bigger 6×9 grid. If the rabbit moves down, then we are reducing the problem to a 5×9 grid, because it can't go back. Similarly, if the rabbit then moves right, we are further reducing the problem to a 5×8 grid. Since the rabbit can only move down or right, we are always reducing the problem to a smaller grid. Notice that when the rabbit gets to the last row or column, it
can only keep moving in one direction until it reaches the bottom-right corner. Once the rabbit gets to the bottom-right corner, it has no more moves left. It's often useful to think about corner cases like this one when trying to construct a base case. In this case, we can say that the rabbit has only one way to get to the bottom-right corner of a grid of size 1×1, which is by not moving at all.

Okay, so let's go back and try to construct a recursive solution. If we want to solve the problem for a 4×6 grid, in how many ways can we do that? Since the rabbit can move down or to the right, we need to take both options into account. If the rabbit moves down, then the number of solutions is the same as for the 3×6 grid, and if it moves to the right, then the number of solutions is the same as for the 4×5 grid. Since it can go in either direction, we take the sum of both sub-problems. Also notice that our formula doesn't quite work when one of the coordinates is 1, because it requires the solution for a grid of size 0, which we haven't defined. One way we can solve this is by expanding our base case: we can say that any grid with at least one dimension equal to 1 is a base case, because we already know there is only one way to get to the bottom-right corner once the rabbit reaches the last row or column.

Let's try to convert this into code. We want to populate the memo dictionary such that memo[(n, m)] gives us the solution. Let's initialize the base cases first by iterating over all grids with at least one dimension equal to 1; we can do this with two for loops. After initializing the base cases, let's populate the memo dictionary for all sizes using our recursive formula. We start from size 2×2, because the grids with a dimension of 1 are base cases. Finally, we return the solution for n by m. How many paths do you think exist in an 18×6 grid? What about 75×19?
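The on-screen code isn't in the transcript, so here is a minimal sketch of the bottom-up grid-path count just described (the function name grid_paths is an assumption; the memo dictionary keyed by (n, m) follows the transcript):

```python
def grid_paths(n, m):
    # memo[(i, j)] = number of ways the rabbit can cross an i-by-j grid.
    memo = {}
    # Base cases: grids with at least one dimension of 1 have exactly one path.
    for i in range(1, n + 1):
        memo[(i, 1)] = 1
    for j in range(1, m + 1):
        memo[(1, j)] = 1
    # Moving down or right each reduce the grid, so sum both sub-problems.
    for i in range(2, n + 1):
        for j in range(2, m + 1):
            memo[(i, j)] = memo[(i - 1, j)] + memo[(i, j - 1)]
    return memo[(n, m)]

print(grid_paths(2, 3))  # 3
```

The two initial loops handle the base cases; the nested loops then fill the table in an order that guarantees both dependencies of each cell are already computed, which is exactly the topological order discussed earlier.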
Okay, so we have solved a couple of basic dynamic programming problems at this point; in future videos, we will tackle more complex ones. I'd like to finish with a couple of important points that you should take away from this video. When you're trying to solve dynamic programming problems, start by defining the sub-problem; for simpler problems, sub-problems usually have the same form as the actual problem. After defining a sub-problem, try to figure out what the base case is. Finally, try to come up with a recursive formula that solves the generic case. And remember: visualization is your friend here. Don't forget to subscribe to get notified about future dynamic programming videos. Also, if you've enjoyed this video, please hit the like button, because it will tell me that it's worth making content like this. See you in the next one!