Transcript for:
Understanding Dynamic Programming Concepts

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

We start a brand new, exciting topic: dynamic programming. So exciting. Actually, I am really excited, because dynamic programming is my favorite thing in the world, in algorithms, and it's going to be the next four lectures. It has lots of different facets. It's a very general, powerful design technique. We don't talk a lot about algorithm design in this class, but dynamic programming is one technique that's so important, and also takes a little while to settle in, that we like to inject it into you now, in 6.006.

In general, our motivation is designing new algorithms, and dynamic programming, also called DP, is a great way, a very general, powerful way, to do this. It's especially good at, and intended for, optimization problems, things like shortest paths. You want to find the best way to do something. Shortest paths: you want to find the shortest path, the minimum-length path. If you want to minimize or maximize something, that's an optimization problem, and typically good algorithms to solve them involve dynamic programming. That's a bit of a broad statement. You can also think of dynamic programming as a kind of exhaustive search, which is usually a bad thing to do, because it leads to exponential time. But if you do it in a clever way, via dynamic programming, you typically get polynomial time. So one perspective is that dynamic programming is approximately careful brute force. It's kind of a funny combination, a bit of an oxymoron, but we take the idea of brute force, which is "try all possibilities," and we do it carefully, and we get it down to polynomial time. There are a lot of problems where essentially the only known polynomial-time algorithm is via dynamic programming. It doesn't always work; there are some problems where we don't think polynomial-time algorithms exist. But when it's possible, DP is a nice, general approach. We're going to be talking a lot about dynamic programming. There are a lot of different ways to think about it; we'll look at a few today. We're going to warm up today with some fairly easy problems that we already know how to solve, namely computing Fibonacci numbers, which is pretty easy, and computing shortest paths. Then, in the next three lectures, we're going to get to more interesting examples, where it's pretty surprising that you can even solve the problem in polynomial time.

Probably the first burning question on your mind, though, is: why is it called dynamic programming? What does that even mean? I used to have this spiel about how, well, programming refers to, I think, the British notion of the word, where it's about optimization. Optimization in American English is something like programming in British English, where you want to set up the program, the schedule, for your trains or something; I think that's where "programming" comes from originally. But I looked up the actual history of why it's called dynamic programming. Dynamic programming was invented by a guy named Richard Bellman. You may have heard of Bellman, of the Bellman-Ford algorithm. This is actually the precursor to Bellman-Ford, and we're going to see Bellman-Ford come up naturally in this setting.
So here's a quote about him. It says: Bellman explained that he invented the name "dynamic programming" to hide the fact that he was doing mathematical research. He was working at this place called RAND, under a Secretary of Defense who had a pathological fear and hatred of the term "research." So he settled on the term "dynamic programming" because it would be difficult to give a pejorative meaning to it, and because it was something not even a congressman could object to. Basically, it sounded cool. So that's the origin of the name dynamic programming. Why is it called that? Who knows. I mean, now you know, but it's a weird term. Just take it for what it is.

All right, so we are going to start with this example of how to compute Fibonacci numbers. But maybe before we actually start, I'm going to give you a sneak peek of what you can think of dynamic programming as. This equation, so to speak, is going to change throughout today's lecture, and at the end we'll settle on a more accurate perspective. The basic idea of dynamic programming is to take a problem, split it into subproblems, solve those subproblems, and reuse the solutions to your subproblems. It's like a lesson in recycling.

OK, so we'll see that with Fibonacci numbers. You remember Fibonacci numbers, right? The number of rabbits you have on day n, if they reproduce. We've mentioned them before; we were talking about AVL trees, I think. So this is the usual recursive definition, or recurrence, for Fibonacci numbers; it's a definition of what the nth Fibonacci number is. Let's suppose our goal, an algorithmic problem, is: compute the nth Fibonacci number. I'm going to assume here that it fits in a word, and so basic arithmetic, addition, whatever, is a constant-time operation. So how do we do it? You all know how to do it, many ways, but I'm going to give you the dynamic programming perspective on things. This will seem kind of obvious, but we're going to apply exactly the same principles that we will apply over and over in dynamic programming; here it's just in a very familiar setting.

So we're going to start with the naive recursive algorithm, and that is: if you want to compute the nth Fibonacci number, you check whether you're in the base case. I'm going to write it this way: f is just our return value; you'll see why I write it this way in a moment. In the base case, f is 1. Otherwise, you recursively call Fibonacci of n minus 1, you recursively call Fibonacci of n minus 2, add them together, return that. This is a correct algorithm. Is it a good algorithm? No. It's very bad: exponential time.
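Here is a minimal Python sketch of that naive recursion, assuming the common convention F(1) = F(2) = 1; it mirrors the board's style of setting f and then returning it:

```python
def fib(n):
    # Naive recursion, straight from the definition: exponential time.
    if n <= 2:          # base case (assuming the convention F(1) = F(2) = 1)
        f = 1
    else:
        f = fib(n - 1) + fib(n - 2)
    return f
```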
How do we know it's exponential time, other than from experience? Well, we can write the running time as a recurrence: T(n) represents the time to compute the nth Fibonacci number. How can I write the recurrence? Good throwback to the early lectures, divide and conquer. I hear whispers. Yeah: T(n) = T(n-1) + T(n-2) + Θ(1). To compute the nth Fibonacci number, we have to compute the (n-1)st Fibonacci number and the (n-2)nd Fibonacci number; those are the two recursive terms. And then we take constant time otherwise: we do a constant number of additions, comparisons, returns; all those operations take constant time. So that's the recurrence. How do we solve it? Well, one way is to see that this is the Fibonacci recurrence, with a little extra, so in particular T(n) is at least the nth Fibonacci number. And if you know your Fibonacci stuff, that's about the golden ratio to the nth power, which is bad. We had a similar recurrence with AVL trees, so another way to solve it, just as good review: say, oh well, T(n) is at least 2 · T(n-2). It's going to be monotone; the bigger n is, the more work you have to do, because to do the nth thing you have to do the (n-1)st thing. So we can just reduce T(n-1) to T(n-2); that gives us a lower bound. And now this is an easy recurrence: you're multiplying by 2 each time, and you're subtracting 2 from n each time. How many times can I subtract 2 from n? n/2 times, before I get down to a constant. So this is at least 2^(n/2), times some constant, which is what you get in the base case. I guess I should say Theta: this thing is Theta of that. OK, so it's at least that big, and the right constant in the base of the exponent is phi, the golden ratio.

OK, so that's a bad algorithm; we all know it's a bad algorithm. But I'm going to give you a general approach for making bad algorithms like this good, and that general approach is called memoization. We'll go over here; this is a technique of dynamic programming. So I'm going to call this the memoized dynamic programming algorithm. Did I settle on using "memo" in the notes? Yeah. The idea is simple: whenever we compute a Fibonacci number, we put it in a dictionary. Then, when we need to compute the nth Fibonacci number, we check: is it already in the dictionary? Did we already solve this problem? If so, return that answer; otherwise, compute it. You'll see the transformation is very simple: these two lines are identical to those two lines. And you can see how the transformation works in general: you can do this with any recursive algorithm. The memoization transformation on that algorithm is: we initially make an empty dictionary called memo, and before we actually do the computation, we check whether this version of the Fibonacci problem, computing F(n), is already in our dictionary. If that key is already in the dictionary, we return the corresponding value. And once we've computed the nth Fibonacci number, if we bothered to, if the lookup didn't apply, then we store it in the memo table; we say, if you ever need to compute F(n) again, here it is. And then we return that value. So this is a general procedure; it applies to any recursive algorithm, with no side effects, I guess, technically. And it turns out this makes the algorithm efficient.
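To see the transformation concretely, here is a Python sketch of the memoized version; only the memo dictionary and the lookup/store lines are new relative to the naive code:

```python
memo = {}

def fib(n):
    # Memoized DP: check the memo table before doing any work.
    if n in memo:
        return memo[n]
    if n <= 2:
        f = 1
    else:
        f = fib(n - 1) + fib(n - 2)
    memo[n] = f          # store the answer so it is never recomputed
    return f
```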
Now, there are a lot of ways to see why it's efficient. In general, maybe it's helpful to think about the recursion tree. If you want to compute F(n), in the old algorithm we compute F(n-1) and F(n-2) completely separately. To compute F(n-1), we compute F(n-2) and F(n-3). To compute F(n-2), we compute F(n-3) and F(n-4). And so on. You can sort of see why that's exponential in n, because we're only decrementing n by one or two each time. But then you observe: hey, these F(n-3)'s are the same. I should really only have to compute them once, and that's what we're doing here. The first time you call F(n-3) you do work, but once it's done and you get over to this other recursive call, it just gets cut off. There's no tree there: here we might have some recursive calls; there we won't, because it's already in the memo table. In fact, this already happens with F(n-2): that whole subtree disappears, because F(n-2) has already been done. OK, so it's clear why it improves things.

So in fact you can argue that this call will be free, because you already did the work in here. But I want to give you a very particular way of thinking about why this is efficient, which is the following. You could write down a recurrence for the running time here, but in some sense recurrences aren't quite the right way of thinking about this, because recursion is kind of a rare thing. If you're calling Fibonacci of some value k, you're only going to make recursive calls the first time you call Fibonacci of k, because henceforth you've put it in the memo table and you will not recurse. So you can think of there being two versions of calling Fibonacci of k. There's the first time, the non-memoized version, which does recursion, does some work. And every time henceforth, you're doing memoized calls to Fibonacci of k, and those cost constant time. The memoized calls cost constant time, so we can think of them as basically free: when you call Fibonacci of n minus 2, because that's a memoized call, you really don't pay anything for it. I mean, you're already paying constant time to do the addition and whatever, so you don't have to worry about that time; there's no recursion there. And then what we care about is the number of non-memoized calls, the first time you call Fibonacci of k, and that is n. No Theta is even necessary: we're going to call Fibonacci of 1 at some point, we're going to call Fibonacci of 2 at some point, and the original call is Fibonacci of n; all of those things will be called at some point. That's pretty easy to see. But in particular, it's certainly at most n: we never call Fibonacci of n+1 to compute Fibonacci of n. So it's at most n calls, and indeed it will be exactly n calls that are not memoized. Those are the ones we have to pay for. How much do we have to pay? Well, if you don't count the recursion, which is what this recurrence does, if you ignore recursion, then the total amount of work done here is constant. So I will say the nonrecursive work per call is constant. And therefore I claim that the running time is constant... oh, sorry, is linear. Constant would be pretty amazing. This is actually not the best algorithm; as an aside, the best algorithm for computing the nth Fibonacci number uses log n arithmetic operations, so you can do better, but if you want to see that you should take 6.046. We're just going to get to linear today, which is a lot better than exponential. So why linear? Because there are n non-memoized calls, and each of them costs constant time, so it's the product of those two numbers.
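In symbols, that counting argument is just one product; it's a special case of a formula we'll state in general in a moment:

```latex
T(n) \;=\; \underbrace{n}_{\text{non-memoized calls}} \;\times\; \underbrace{\Theta(1)}_{\text{nonrecursive work per call}} \;=\; \Theta(n).
```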
OK, this is an important idea, and it's so important I'm going to write it down again, in a slightly more general framework. In general, in dynamic programming... I didn't say why it's called memoization. The idea is that you have this memo pad where you write down all your scratch work; that's this memo dictionary. To memoize is to write down on your memo pad. I didn't make it up; it's another crazy term. It means: remember. And then you remember all the solutions that you've computed, and you reuse those solutions. Now, these solutions are not really solutions to the problem I care about. The problem I care about is computing the nth Fibonacci number; to get there, I had to compute other Fibonacci numbers. Why? Because I had a recursive formulation. This is not always the way to solve a problem, but usually, when you're solving something, you can split it into parts, into subproblems, as we call them. They're not always of the same flavor as your original goal problem, but there are some kind of related parts, and this is the big challenge in designing a dynamic program: to figure out what the subproblems are. The first thing I always want to know about a dynamic program is: what are the subproblems? Somehow they are designed to help solve your actual problem. And the idea of memoization is: once you solve a subproblem, write down the answer; if you ever need to solve that same subproblem again, reuse the answer. That is the core idea. And so, in this sense, dynamic programming is essentially recursion plus memoization. In this case, the subproblems are Fibonacci of 1 through Fibonacci of n. The one we care about is Fibonacci of n, but to get there we solve these other subproblems.

In all cases, if this is the situation, so for any dynamic program, the running time is going to equal the number of different subproblems you might have to solve, or that you do solve, times the amount of time you spend per subproblem. In this situation we had n subproblems, and for each of them we spent constant time. When I measure the time per subproblem, which in the Fibonacci case I claim is constant, I ignore recursive calls. That's the key: we don't have to solve recurrences with dynamic programming. Yay, no recurrences necessary. Don't count recursions; obviously, don't count memoized recursions. The reason is that I only need to count each subproblem once; after the first time I do it, it's free. So I count how many different subproblems I need to do. Those are going to be the expensive recursions, where I do some amount of work, but I don't count the recursions within them, because otherwise I'd be double counting. I only want to count each subproblem once, and then this formula will solve it. So, a simple idea. In general, dynamic programming is a super simple idea; it's nothing fancy, it's basically just memoization. There is one extra trick we're going to pull out, but that's the idea.

All right, let me tell you another perspective. This is the one maybe most commonly taught. I'm not a particular fan of it; I really like memoization. I think it's a simple idea, and as long as you remember this formula, it's really easy to work with. But some people like to think of it this way, so you can pick whichever way you find most intuitive. Instead of thinking of a recursive algorithm, which in some sense starts at the top of what you want to solve and works its way down, you can do the reverse: start at the bottom and work your way up. This is probably how you normally think about computing Fibonacci numbers, or how you learned it before. I'm going to write it in a slightly funny way. The point I want to make is that the transformation I'm doing, from the naive recursive algorithm, to the memoized algorithm, to the bottom-up algorithm, is completely automatic. I'm not thinking; I'm just doing. It's easy: this code is exactly the same as that code, except I replaced n by k, just because I want to iterate over n values. And then there's this stuff around that code, which is just formulaic; a little bit of thought goes into the for loop, but that's it. This does exactly the same thing as the memoized algorithm. Maybe it takes a little bit of thinking to realize that if you unroll all the recursion that's happening there and just write it out sequentially, this is exactly what's happening.
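Here is the bottom-up version as a Python sketch; the loop body is the memoized body with n renamed to k, and table lookups in place of the recursive calls:

```python
def fib(n):
    # Bottom-up DP: solve subproblems in increasing order of k, so
    # fib_table[k - 1] and fib_table[k - 2] are always ready when needed.
    fib_table = {}
    for k in range(1, n + 1):
        if k <= 2:
            f = 1
        else:
            f = fib_table[k - 1] + fib_table[k - 2]
        fib_table[k] = f
    return fib_table[n]
```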
This code does exactly the same additions, exactly the same computations, as the memoized one. The only difference is how you get there: here we're using a loop, there we're using recursion. But the same things happen in the same order; there's really no difference between the codes. This code is probably going to be more efficient in practice, because you don't make so many function calls. In fact, I made a little mistake here: this is not a function call, it's just a lookup into a table. Here I'm using a hash table, to be simple, but of course you could use an array. Either way it's constant time, with good hashing. All right, so is it clear what this is doing? I think so. I think I made a little typo... we have to compute, another typo, we have to compute F(1) up to F(n), which in Python is that range. And we compute it exactly how we used to, except now, instead of recursing, I know that when I'm computing the kth Fibonacci number (man, so many typos; you guys are laughing), I've already computed the previous two. Why? Because I'm doing them in increasing order; nothing fancy. So I can just do this, and the solutions will just be waiting there. If they weren't, I'd get a KeyError, so I'd know there's a bug; but in fact I won't get a KeyError, because I will always have computed these things already. Then I store it in my table, then I iterate. Eventually I've solved all the subproblems, F(1) through F(n), and the one I cared about was the nth one.

OK, so, straightforward. I'm doing this because I don't really want to go through this transformation for every single problem we do. I'm doing it for Fibonacci because it's super easy to write the code out explicitly, but you can do it for all of the dynamic programs that we cover in the next four lectures. So now the general case; this was the special Fibonacci version. In general, the bottom-up algorithm does exactly the same computation as the memoized version, and what we're doing is actually a topological sort of the subproblem dependency DAG. In this case the dependency DAG is very simple. I'll do it backwards: in order to compute F(n), I need to know F(n-1) and F(n-2). If I know those, I can compute F(n). Then there's F(n-3), which is necessary to compute this one and that one, and so on. So you see what this DAG looks like. I've drawn it conveniently so all the edges go left to right, so this is a topological order, from left to right, and I just need to do F(1), F(2), up to F(n), in order. Usually it's totally obvious what order to solve the subproblems in, but in general, what you should have in mind is that we are doing a topological sort. Here we just did it in our heads, because it's so easy; and usually it's so easy, it's just a for loop, nothing fancy. (All right, I'm missing an arrow.)

All right, let's do something a little more interesting, shall we? One thing you can do from this bottom-up perspective is save space, storage space, in your algorithm. We don't usually worry about space in this class, but it matters in reality. Here we're building a table of size n, but in fact we really only need to remember the last two values. So you could just store the last two values, and each time you make a new one, delete the oldest. By thinking a little bit here, you realize you only need constant space. Still linear time, but constant space. And that's often the case: from the bottom-up perspective, you see what you really need to store, what you need to keep track of.
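As a sketch of that space saving (the rolling variable names are mine), the table can be replaced by two values:

```python
def fib(n):
    # Same bottom-up computation, but only the last two values are kept,
    # so storage drops from Theta(n) to O(1); time is still Theta(n).
    a, b = 1, 1                  # F(1), F(2)
    for _ in range(n - 2):       # no iterations needed for n <= 2
        a, b = b, a + b          # shift the window: forget the oldest value
    return a if n == 1 else b
```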
I guess another nice thing about this perspective is that the running time is totally obvious: this is clearly constant time, so this is clearly linear time. Whereas with the memoized algorithm, you have to think about when it's going to be memoized and when it's not. I still like that perspective, because with this rule, just multiply the number of subproblems by the time per subproblem, you get the answer; but it's a little less obvious than code like this. So choose however you like to think about it.

All right, we move on to shortest paths. I'm again, as usual, thinking about single-source shortest paths: we want to compute the shortest-path weight from s to v, for all v. I'd like to write this, initially, as a naive recursive algorithm, which I can then memoize, which I can then bottom-up-ify. I just made that word up. So how could I write this as a naive recursive algorithm? It's not so obvious. But first, I'm going to tell you how, just as an oracle tells you: here's what you should do. Well, actually, it's up to you: I could tell you the answer and then we can figure out how we got there, or we could just figure out the answer. Preferences? Figure it out. All right, good. No divine inspiration allowed.

So let me give you a tool. The tool is: guessing. This may sound silly, but it's a very powerful tool. The general idea is: suppose you don't know something, but you'd like to know it. What's the answer to this question? I don't know. Man, I really want to know the answer. How am I going to answer the question? Guess. It's a tried and tested method for solving any problem. I'm belaboring the point here. The algorithmic concept is: don't just try any guess; try them all. Also pretty simple. I said dynamic programming was simple. Try all the guesses. This is central to dynamic programming. I know it sounds obvious, but if I want to fix my equation here: dynamic programming is roughly recursion plus memoization; this should really be plus guessing. Memoization, which is obvious, and guessing, which is obvious, are the central concepts of dynamic programming. I'm trying to make it sound easy, because usually people have trouble with dynamic programming. It is easy: try all the guesses. That's something a computer can do great; this is the brute force part. But we're going to do it carefully. Well, not that carefully; I mean, we're just trying all the guesses. Take the best one. That's kind of important, that we can choose one to be called best; that's why dynamic programming is good for optimization problems. You want to maximize or minimize something; you try them all, and then you can forget about all of them and just reduce it down to one thing, which is the best one, or a best one.

OK, so now I want you to try to apply this principle to shortest paths. I'm going to draw a picture, which may help. We have the source s, we have some vertex v, and we'd like to find the shortest path, a shortest path, from s to v. Suppose I want to know what this shortest path is; suppose this was it. You have an idea already? Yeah: what you can do is look at where you could go from s, and take the best of each of those. Good. So I can look at all the places I could go from s, and then look at the shortest paths from there to v; we could call such a place s'. So here's the idea: there's some hypothetical shortest path. I don't know where it goes first, so I will guess where it goes first. I know the first edge must be one of the outgoing edges from s.
I don't know which one, so try them all. Very simple idea. Then, from each of those, if somehow I could compute the shortest path from there to v, I'd just do that, and take the best choice for what that first edge was. So this would be the "guess the first edge" approach. It's a very good idea, but not quite the one I wanted, because unfortunately it changes s. It would still work; it would just be slightly less efficient, if I'm solving single-source shortest paths. So I'm going to tweak that idea slightly, by guessing the last edge instead of the first edge. They're really equivalent; if I did the first-edge version, I'd essentially be solving single-target shortest paths, which we talked about before.

So I'm going to draw the same picture. I want to get to v. I'm going to guess the last edge; call it (u, v). I know it's one of the incoming edges to v. Unless s equals v, then there's a special case; but as long as this path has length at least 1, there's some last edge. What is it? I don't know. Guess. Guess all the possible incoming edges to v, and then recursively compute the shortest path from s to u, and then add on the edge (u, v). So what is this shortest path? It's delta(s, u), which looks the same: it's another subproblem that I want to solve. There are V subproblems here that I care about, so that's good. I take that, I add on the weight of the edge (u, v), and that should hopefully give me delta(s, v). Well, it does if I was lucky and guessed the right choice of u. In reality, I'm not lucky, so I have to minimize over all edges (u, v). So this is the recurrence: we're minimizing over the choice of u, and v is already given here. I take the minimum, over all edges (u, v), of the shortest-path weight from s to u, plus the weight of the edge (u, v). That should give me the shortest path, because this gave me the shortest path from s to u, and then I add on the edge I need to get there. And wherever the shortest path is, it uses some last edge (u, v); there's got to be some choice of u that's the right one, the good guess that we're hoping for. We don't know which one, so we just try them all. But whatever it is, this will be the weight of that path: it's going to take the best path from s to u, because subpaths of shortest paths are shortest paths, optimal substructure. So this part will be delta(s, u), this part is obviously w(u, v), and so this will give the right answer, hopefully.

I mean, this is an algorithm, right? You could say this is a recursive call. If we treat this as a recursive call, instead of just a definition, then this is a recursive algorithm. How good or bad is this recursive algorithm? Terrible. Very good, very bad, I should say: it's definitely going to be exponential without memoization. But we know how to make algorithms better: we memoize. So I think you know how to write this as a memoized algorithm. To define the function delta(s, v), you first check: is (s, v) in the memo table? If so, return that value. Otherwise, do this computation, where this is a recursive call, and then store it in the memo table. I don't think I need to write that down; it's just like the memoized code over there, only now there are two arguments instead of one. In fact, s isn't changing, so I only need to key the memo table by v, instead of by (s, v).
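Here's a minimal Python sketch of that memoized recursion. The graph representation, an incoming-edges dict plus a weight dict keyed by edge, is my assumption rather than the lecture's, and, as we're about to see, the recursion only terminates on acyclic graphs:

```python
import math

def make_delta(s, incoming, weight):
    # Memoized recursion for single-source shortest paths.
    # incoming[v] = list of u with an edge u -> v (assumed representation);
    # weight[(u, v)] = weight of that edge.
    # WARNING: on a graph with cycles, this recursion never terminates.
    memo = {s: 0}                      # base case: delta(s, s) = 0
    def delta(v):
        if v not in memo:              # first (non-memoized) call: do the work
            memo[v] = min((delta(u) + weight[(u, v)] for u in incoming[v]),
                          default=math.inf)   # no incoming edges: unreachable
        return memo[v]
    return delta
```

Calling make_delta(s, incoming, weight) returns a function delta, and delta(v) gives the shortest-path weight from s to v, memoized by v exactly as described above.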
So, is that a good algorithm? I claim memoization makes everything faster. Is that a fast algorithm? Not so obvious, I guess. How many people think yes, it's a good algorithm? Better, definitely better; it can't be worse. How many people think it's a bad algorithm, still? OK, so three for yes, zero for no. How many people aren't sure? Yeah, including the yes votes. Good. All right, it's not so tricky. Let me draw you a graph, something like that, and say we want to compute delta(s, v). Let me give these guys names, a and b. To compute delta(s, v), we need to know... actually we just need one thing, since there's only one incoming edge to v, so it's delta(s, a). (Sorry, I should have put a base case here too: delta(s, s) = 0.) So, delta(s, a) plus the edge weight. There is some shortest path to a. To compute the shortest path to a, we look at all the incoming edges to a. There's only one, so it's delta(s, b). Now I want to compute the shortest path to b. Well, there are two ways to get to b: one of them is delta(s, s), coming from s; the other way is delta(s, v). Do you see a problem? Yeah: delta(s, v) is what we were trying to figure out. Now, you might say, oh, it's OK, because we're going to memoize our answer to delta(s, v) and then we can reuse it here. Except we haven't finished computing delta(s, v); we can only put it in the memo table once we're done. So when this call happens, the memo table has not been set, and we're going to do the same thing over and over and over again. This is an infinite algorithm. Oops. Not so hot.

So it's going to be infinite time on graphs with cycles. For DAGs, for acyclic graphs, it actually runs in V + E time; that's the good case. In this situation we can use the formula: the time is equal to the number of subproblems times the time per subproblem. So I guess we have to think about that a little bit. Where's my code? Here's my code. The number of subproblems is V: there are V different subproblems that I'm using here, because I'm always reusing subproblems of the form delta(s, something), and the something could be any of the V vertices. How much time do I spend per subproblem? This is a little tricky: it's the number of incoming edges to v. So the time for subproblem delta(s, v) is the in-degree of v, the number of incoming edges to v. This depends on v, so I can't just take a straightforward product here. What the formula is really saying is that you should sum up, over all subproblems, the time per subproblem. So the total time is the sum, over all vertices v in V, of the in-degree of v, and we know that sum is the number of edges. Well, it's really in-degree plus 1, for the constant work per call, and summing the plus-ones gives V, so this is E + V. Handshaking lemma again.

Now, we already knew an algorithm for shortest paths in DAGs, and it ran in V + E time, so this is another way to do the same thing. If you think about it long enough, this algorithm, memoized, is essentially doing a depth-first search to do a topological sort, to run one round of Bellman-Ford. So we had: topological sort plus one round of Bellman-Ford; this is kind of it all rolled into one. This min should look kind of like the Bellman-Ford relaxation step, or the shortest-paths relaxation step; it really is doing the same thing. So it's really the same algorithm, but we come at it from a different perspective.

OK. But I claim I can use the same approach to solve shortest paths in general graphs, even when they have cycles. How am I going to do that? DAGs seem fine; what was the lesson learned here? The lesson learned is that subproblem dependencies should be acyclic; otherwise, we get an infinite algorithm.
For memoization to work, this is what you need. It's all you need. We've almost seen this already, because I said that to do a bottom-up algorithm, you do a topological sort of the subproblem dependency DAG. I already said it should be acyclic; we just forgot. I didn't tell you yet. So, for that to work, it had better be acyclic. For DP to work, for memoization to work, it had better be acyclic. And if you're acyclic, then this is the running time. So that's all general.

So somehow I need to take a cyclic graph and make it acyclic. You've actually done this already, in recitation. If I have a graph, let's take a very simple cyclic graph. One thing I can do is explode it into multiple layers. We did this on quiz two, in various forms; it's like the only cool thing you can do with shortest paths, I feel like. If you want to make a shortest-path problem harder, require that you reduce your graph to k copies of the graph. I'm going to do it in a particular way here, which I think you've seen in recitation, which is to think of this axis as time, or however you want, and make all of the edges go from each layer to the next layer. This should be a familiar technique. The idea is: every time I follow an edge, I go down to the next layer. This makes any graph acyclic. Done. What in the world does this mean? What is it doing? What does it mean? Double rainbow. (I don't know how I've gone so long in the semester without referring to double rainbow; it used to be my favorite.)

All right, so here's what it means. Delta-sub-k of (s, v), I'm going to define this first. This is a new kind of subproblem: the weight of the shortest s-to-v path that uses at most k edges. So I want it to be shortest in terms of total weight, but I also want it to use few edges total. So here's s, and I'm always going to make s this one; and then this is going to be v in the zero situation, this is v in the one situation, and this is v in the two situation; maybe I'll call them v_0, v_1, v_2. The shortest path from here to v_0: that is delta_0(s, v), and there's no way to get there in zero edges (so it's infinite, unless v equals s). The shortest path from here to v_1: that's the best way to get there with at most one edge. The shortest path from here to v_2: well, if I add some vertical edges too, I guess, cheating a little bit, then this is the best way to get from s to v using at most two edges. And then you get a recurrence, which is the min over all last edges: I'm just copying that recurrence from before, but realizing that the s-to-u part uses one fewer edge, and then I use the edge (u, v).
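Written out cleanly, the recurrence and base case are as below; the first term in the min is the "vertical edge," i.e., reusing a path with fewer edges:

```latex
\delta_k(s,v) \;=\; \min\Bigl(\,\delta_{k-1}(s,v),\;
    \min_{(u,v)\in E}\bigl\{\delta_{k-1}(s,u) + w(u,v)\bigr\}\Bigr),
\qquad
\delta_0(s,s) = 0, \quad \delta_0(s,v) = \infty \ \text{for } v \neq s.
```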
That's our new recurrence. By adding this k parameter, I've made the recurrence on subproblems acyclic. Unfortunately, I've also increased the number of subproblems. The number of subproblems now is V squared; well, technically V times (V minus 1)... actually, sorry, let me get it right: I start at zero. My goal is delta_{V-1}(s, v), because by the Bellman-Ford analysis I know that I only care about simple paths, paths of length at most V minus 1. (I'm assuming here there are no negative-weight cycles; I should have said that earlier.) If you assume that, then this is what I care about. So k ranges from 0 to V minus 1; there are V choices for k, and there are V choices for v, so the number of subproblems is V squared. How much time do I spend per subproblem? Well, same as before, the in-degree; where did I write it? Up here: the in-degree of that vertex. So what I'm really doing is summing, over all v, the in-degree of v, and then I multiply by V, once for each choice of k. So the total running time is V times E. Sound familiar? This is Bellman-Ford's algorithm again, and this is actually where the Bellman-Ford algorithm came from: this view of dynamic programming. So we've seen yet another way to do Bellman-Ford. It may seem familiar, but in the next three lectures we're going to see a whole bunch of problems that can succumb to the same approach, and that's super cool.
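To close, here is a minimal bottom-up sketch of the delta_k recurrence, my own rendering rather than code from the lecture, assuming the graph is given as an edge list with a weight dict and has no negative-weight cycles:

```python
import math

def bellman_ford(vertices, edges, weight, s):
    # Bottom-up DP over the delta_k recurrence:
    #   delta_k(s, v) = min(delta_{k-1}(s, v),
    #                       min over edges (u, v) of delta_{k-1}(s, u) + w(u, v))
    # V choices of k, times the sum of in-degrees, gives O(V * E) total time.
    d = {v: math.inf for v in vertices}   # delta_0: only s is reachable
    d[s] = 0
    for _ in range(len(vertices) - 1):    # k = 1, ..., V-1
        d_prev = dict(d)                  # freeze the delta_{k-1} values
        for (u, v) in edges:
            if d_prev[u] + weight[(u, v)] < d[v]:
                d[v] = d_prev[u] + weight[(u, v)]
    return d                              # d[v] = delta(s, v)
```

For example, bellman_ford({'s', 'a', 'b'}, [('s', 'a'), ('a', 'b')], {('s', 'a'): 1, ('a', 'b'): -2}, 's') returns {'s': 0, 'a': 1, 'b': -1}. Carrying d over between rounds plays the role of the vertical edges: a path with fewer edges is always still available.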