Okay? Welcome back. This is Discrete Optimization, the second part on the knapsack problem. So what I'm going to do in this lecture is switch hats. I'm going to take off my made-in-the-USA hat and put on another hat, made in China. And the reason I'm doing this is that we are going to look at a completely different way of solving exactly the same problem. That's what is beautiful in optimization, right? You have many hats, and every one of them is a bag of tools that you can use for tackling a problem. So today I'm changing my bag of tools. I'm picking up another tool, and I'm going to show you how to use that tool to actually solve the knapsack. One thing I won't tell you is which of the two hats is better; you will have to figure that out for yourself. But we're going to introduce a technique which is widely useful, called branch and bound. Once again, we take the one-dimensional knapsack that we solved with dynamic programming, my other hat, the other day. So what we have is the value of every item and the weight of every item; the total weight of the items we select has to be smaller than the capacity K, and we have to decide whether we take each item or not, a 0/1 decision for every item. That's the one-dimensional knapsack. Now, if I want to introduce branch and bound, I have to start with exhaustive search, and that's the first thing I told you last time: you can always decide, I take the item or I don't. Once I make that decision, I have two nodes. I can do the same thing at the next level: I can decide whether I take item two or I don't. And I can continue. If I have three items, I get eight configurations; if I have four, I get 16, then 32, 64, and so on. It grows exponentially, and of course what you don't want to do is explore all of these configurations. What we did with dynamic programming was compute this thing bottom-up, avoiding computing the same thing several times. What we are going to see here is branch and bound, where you iterate two steps. A branching step, which is really about exploring this tree; basically, we are prepared to explore the entire tree. And then this technique called bounding. Bounding is going to tell you: wow, I look at this node, and I know for sure that I am never going to be able to find an optimal solution below that node. So I don't have to expand it, and I can prune large parts of the tree. That's what branch and bound is all about: looking at a tree and removing large pieces of it using what is called bounding. So, branching, I told you what it is, very easy: it's like exhaustive search, you essentially want to enumerate everything. And bounding is about finding an optimistic evaluation of a particular node. Optimization is for optimistic people; if you are not optimistic, you can already give up, because we are trying to solve really hard problems, and if you don't think you can do it, there is no hope. So you have to be optimistic, and bounding is exactly that: you look at a node and ask how well you could possibly do from there. If you are maximizing, that optimistic evaluation is an upper bound, the best you could ever hope for from this knapsack. If it's a minimization, it's a lower bound: how low can I go? And of course you want this optimistic evaluation to be as tight as possible, as close to the optimal value as possible. You want to evaluate how good this node can be as precisely as possible. That's bounding.
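As a point of reference while following along, here is the one-dimensional knapsack model being described, written out in the usual notation; v_i and w_i are the value and weight of item i, K is the capacity, and x_i is the take-it-or-not decision variable. This is just a transcription of the standard model, nothing new:

```latex
\[
\begin{aligned}
\text{maximize}   \quad & \sum_{i} v_i \, x_i \\
\text{subject to} \quad & \sum_{i} w_i \, x_i \le K \\
                        & x_i \in \{0, 1\} \quad \text{for each item } i
\end{aligned}
\]
```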
So, how do we find this bound? How do we bound? The basic idea is that what we're going to do is take the problem and relax it. And the thing you always have to remember is that, in a sense, optimization is the art of relaxation. So optimization people are both optimistic and they know how to relax. We are very nice people, right? So now what we have to discuss is how we do relaxation in practice. You look at this model again, the knapsack problem, and you wonder: how can I relax this problem in such a way that the optimal solution of the relaxation is guaranteed to be at least as good as the true optimal solution? How do I do that? You can look at everything here and decide what to relax, and one of the easy things to relax is the capacity constraint. I simply remove that constraint. Why is it a relaxation? Because essentially I'm increasing the set of solutions to the problem: all these infeasible configurations that I had eliminated before are now solutions. The infeasible configurations become feasible, so I have more of them, and therefore the optimal value of the relaxed problem is going to be at least as high. So that's an optimistic relaxation of this problem. So what happens when you do that? This is the simple example I've shown you. In the table you see the values of all the items and the weight of every one of them. The first item, for instance, has a value of 45 and a weight of 5; the next one has a value of 48 and a weight of 8; the third one a value of 35 and a weight of 3. What we want to do is solve this problem with these three items and a capacity of 10. So you start at the beginning: what is the best, best, best you could ever do? Well, we have relaxed the capacity constraint, so we can take all the items. In this particular case, if you sum the values of all the items, you get 128, and that's what you see at the root node here: the value 128, the capacity of the knapsack, which is 10, and, since we are at the root and haven't actually accumulated anything, a value of 0. So what are we going to do? We're going to take that node and branch.
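Here is a minimal Python sketch of the depth-first search that the walkthrough below traces, using this naive bound (optimistic value = value accumulated so far plus the values of every remaining item). The names and structure are illustrative, not taken from the course code, and the bound is passed in as a parameter so that a tighter one can be plugged in later:

```python
# A sketch of depth-first branch and bound for the knapsack. Items are (value, weight)
# pairs; a node is described by the index of the next item to decide, the remaining
# capacity, and the value accumulated so far.

def naive_bound(items, i, room, value):
    # Relax the capacity constraint: pretend every remaining item still fits.
    return value + sum(v for v, _ in items[i:])

def dfs_branch_and_bound(items, capacity, bound=naive_bound):
    best = 0                                          # best feasible value found so far
    def explore(i, room, value):
        nonlocal best
        if bound(items, i, room, value) <= best:      # bounding: cannot beat the incumbent
            return
        if i == len(items):                           # every item decided: feasible solution
            best = value
            return
        v, w = items[i]
        if w <= room:
            explore(i + 1, room - w, value + v)       # branching: take item i (if it fits)
        explore(i + 1, room, value)                   # branching: don't take item i
    explore(0, capacity, 0)
    return best

items = [(45, 5), (48, 8), (35, 3)]                   # the lecture's three items
print(dfs_branch_and_bound(items, 10))                # -> 80
```

On the three items above it returns 80, which is exactly the solution the walkthrough is about to find.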
So when we branch, and that's what you see there, I branch on the first item. Taking item 1 gives me a value of 45, and it also takes a weight of 5, so the remaining capacity of the knapsack is now 5. And my optimistic evaluation is still 128, because in the relaxation I can still take all the items. So, same evaluation, and I keep going. I decide to take item 2. If I take item 2, boom, I'm exceeding the capacity of the knapsack: 5 plus 8 is 13. So this is an infeasible configuration, not a solution, and I can reject it. I know that I cannot take item 2 if I have taken item 1. So the new evaluation is the value I have plus the sum of all the items I can still take, and in this particular case there is only one left, item 3. So the best I could ever do, if I don't take item 2, is 80. Then I decide to take item 3, that's what you see here, and I get that value of 80. Yes! I have a solution now, and its value is 80. If I don't take item 3, I also get a solution, but its value is 45, so I can prune it away. These nice little symbols here, and we'll see them again and again, mean I can prune that node; I don't need to look at it, it is dominated by the best solution I have found so far. Now you go back, and the other thing you could do up there is not select item 1. What you get there is an optimistic evaluation of 83: if I select the two other items, I can get 83. Now, 83 is still better than 80, the solution that I have, so I have to continue. So I'm continuing. I take item 2, and I still get an evaluation of 83. And then, if I take item 3, I exceed the capacity of the knapsack, so that's not a solution, that's not a feasible configuration. And if I don't take it, I get a value of 48. That's not good; it's dominated by the best solution I found. I go back here and decide not to take item 2, and now the best value at that node is 35. And 35 is worse, way worse, than what we have found over there, so I don't need to explore below that node. That's bounding, right? I know that even if I take all the possible items I have left, this node will never be expanded into something better than 35, so I can prune it away; I don't need to explore the choices for the third item below that node. So that explains the bounding process with a very, very naive and simple relaxation. So now we look at this thing and we say: okay, can I actually do something which is not completely stupid? Can I get a relaxation that gives me a more interesting optimistic evaluation? How can I relax this in an interesting fashion? Well, think about it: instead of these beautiful artifacts that Indiana Jones has to collect, suppose we are trying to collect something which is almost as valuable, Belgian chocolate. So we are looking at all these bars of Belgian chocolate and we will take as many as we can. But then what can happen is that you fill your knapsack and there is this nice bar that doesn't fit into the knapsack. What you can always do at that point is take the bar and break it, and take a piece of it. So, in a sense, one of the things you can do in this particular case, if you are dealing with this beautiful and tasty Belgian chocolate, is cut one item into parts so that it actually fits inside the knapsack. And if you do that, what you are essentially doing is relaxing the values that the decision variables can take.
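Concretely, the relaxation about to be introduced keeps exactly the model from before and only replaces the integrality requirement on the decision variables with a continuous range; this is the standard linear relaxation, written here in the same notation as above:

```latex
\[
\begin{aligned}
\text{maximize}   \quad & \sum_{i} v_i \, x_i \\
\text{subject to} \quad & \sum_{i} w_i \, x_i \le K \\
                        & 0 \le x_i \le 1 \quad \text{for each item } i
\end{aligned}
\]
```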
Remember, before, the values that x_i could take were 0 and 1: I take the item or I don't; well, I don't take it or I take it. And what I'm saying now is: I can take the item, or not take it, or take any part of that item, any fraction of the item. That's really what I'm doing here. So this is an interesting relaxation; we'll come back to it many, many times in this class, and it's called the linear relaxation. What we are basically doing is taking the decision variables, which used to take integral values, and relaxing them so that they can take continuous values inside their range. We are basically saying that we can take fractional parts of these items, fractional values for these variables. Okay, so when you do that, how can you actually compute this linear relaxation? How do you actually find its solution? Because now the variables can take all these fractional values, there are many, many, many more possibilities, right? So how are we going to do that? Well, there is a very, very simple way to do it in this particular case, and the key idea is that you look at all the items and you rank them by the ratio v_i over w_i. Remember, v_i is the value of the item and w_i is its weight, so this ratio gives you the value per kilo or per pound: it ranks the items by which ones are the most valuable per unit of weight, in a sense. And so you rank them in that particular way, and the key idea is that once you have ranked them, you take the most valuable item, then the next most valuable item, and so on. At some point you get to the capacity of the knapsack: you can't fit the next item. And at that point, what you do is take a fractional part of that last item, the part that still fits, and that gives you the optimal solution of the linear relaxation. So if you look at this problem here, these are the various ratios that you see there. The first one is a value of 9, the second one 6, and the third one is roughly 11.7. So the third item is the most valuable per unit of weight, and that's obvious, right? You see, 35 divided by 3 is obviously greater than 6, which is 48 divided by 8. So, now that we have ranked these guys, we're going to select the third item first. It takes only a weight of 3, so you still have room in the knapsack, and you have already got a value of 35. Then you take item 1, with its value of 45, so that gives you 80 already, and you know that it fits in the knapsack: at that point you have already used 8 units of the capacity of the knapsack. Now you look at the last item. That item would take 8 units of capacity in the knapsack, and you can't fit it in; you have only 2 left. And so what you are going to do is take only a quarter of that item, and of course you get only a quarter of its value.
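Here is a minimal sketch of this greedy computation, in the same illustrative style and with the same node convention as the earlier depth-first sketch, so it can be plugged in there as a bound. At the root of the lecture's example it evaluates to 92, the figure discussed next:

```python
# A sketch of the linear relaxation bound: rank the remaining items by value per unit
# of weight, take whole items greedily, then take the fraction of the first item that
# no longer fits entirely.

def linear_bound(items, i, room, value):
    remaining = sorted(items[i:], key=lambda it: it[0] / it[1], reverse=True)
    estimate = float(value)
    for v, w in remaining:
        if w <= room:               # the whole item still fits: take all of it
            estimate += v
            room -= w
        else:                       # take only the fraction that fits, then stop
            estimate += v * room / w
            break
    return estimate

items = [(45, 5), (48, 8), (35, 3)]
print(linear_bound(items, 0, 10, 0))   # -> 92.0 at the root, as in the lecture
```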
Okay, and a quarter of that value is, in this particular case, 12, so you get an optimistic evaluation at the root node of 92. Now, look at what we have done. Before, if you take all the items, we had something like 128, right? Now we have an optimistic evaluation which is 92. Maybe we will never reach that, well, maybe we can, but this is the best we could ever do if we have to select items in their entirety. It is an optimistic evaluation of the best you can do, and it's much tighter than the one we had before, and that's good. You want this bound to be optimistic but as tight as possible, right? And that's what we just did in this particular case. Now, here is the intuition for why this is correct. You can play with the variable transformation where y_i is essentially v_i times x_i. When you do that, the objective function becomes beautiful: it's just the sum of all the decision variables y_i, so every unit in the objective has the same value. What has changed is the weight of these items: the one that was the most valuable per unit of weight is now the one that takes the least space per unit of value. So what you're going to do is fill the knapsack one item at a time, starting with the ones that take the least space per unit of value, because they all have the same value per unit, until you get to the last one, which may take more space than you have left, and of which you will take only a fractional part. This is how you can justify the correctness of the linear relaxation computation I showed you before. So now, of course, in practice you probably don't want to take this stone, this Afghan emblem mask, and break it into parts, right? We have to take entire pieces; we can't break them into parts. So we are going to use that relaxation only as a bounding technique. So now we are at the top, at the root node. We know that the best we could ever do is 92; we haven't selected anything, and we still have the full capacity of the knapsack available. Now, if you select item 1, well, item 1 was taken entirely in the relaxation, so you get the same optimistic value here: you still get 92. You have now accumulated a value of 45, and the remaining capacity of the knapsack is 5. You keep going: if you take item 2, boom, you exceed the capacity of the knapsack; this is infeasible, so you don't consider that guy. You go on the right: you don't take item 2. What is interesting here is that now the evaluation is different. It's 80, basically taking item 1 and item 3. And you get a feasible solution when you take item 3. You also get a feasible solution if you don't take it, but that solution is dominated, so you can prune it away. What is interesting, though, is what happens if you don't take item 1. If you don't take item 1, look: what we have is the value 77, and that value is already lower than the best solution you have found, which is 80. Which basically means what? Whatever you do below that node is never going to get you a solution better than 77, and you know that 77 is already lower than the 80 you have, so you can throw that entire subtree completely away. You can prune it away.
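Plugging this tighter bound into the earlier depth-first sketch (assuming both snippets above are in scope) reproduces this smaller tree: it finds the incumbent 80 on the left and prunes the whole subtree where item 1 is not taken, whose evaluation is 77:

```python
print(dfs_branch_and_bound(items, 10, bound=linear_bound))   # -> 80
```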
Which is great, right? The tree is much smaller than before; you are cutting off this exponential growth. Now, how did we get 77? Well, what you know at that node is that you don't take item 1. The most valuable remaining item per unit of weight is item 3, so you take it: it uses 3 units of the capacity of the knapsack, and you accumulate 35. Which means you have 7 units left in the knapsack. This guy, item 2, takes 8, and you can't take all of it, so what are you going to do? You take 7 of those 8 units. And if you take 7/8 of 48, what you get is 42, and 42 plus 35 is 77. That's the value you see there. The key point here is that this is an optimistic evaluation using the linear relaxation, and because this bound is really tight in this particular case, we can prune the entire subtree below that node in one step, at that level. And therefore the tree is much smaller. We prune that guy; we are done. So, essentially, this was an introduction to depth-first branch and bound; that's the strategy we have used. We start exploring the nodes from left to right: every time we branch, we have two nodes, we take the left one and keep going, and when we go back we explore the right one. That's depth-first search. There is another strategy, which is best-first, and that strategy always expands the node which has the best evaluation. You always say: oh, that node has a good evaluation, I want to take it, because that's probably where the best solutions are. I'm going to discuss that in a moment. There are many other strategies, and we'll come back to them later in the class. Okay, so depth-first: the key idea is that you prune a node every time its evaluation is worse than the best solution you have found so far. It's very memory efficient; the only thing you have to remember is essentially a path. Best-first search always selects the node with the best estimation, and the reason you want to use a technique like that is that you want to dive very quickly to the best possible solution. So let me illustrate this using the stupid relaxation we have seen before, because it's easier to do the illustration with it. So I restart from the root: I have the value 128, nothing in the knapsack, no value accumulated. What I do is branch, I generate these two nodes, and I take the one that has the largest evaluation, the one with the value 128, which is the one where I take item 1. Now I expand that node and I get two nodes: one is infeasible, so I throw it away, and the other one has an evaluation of 80. At that point I look at all the nodes that I have; I have two left which are still potentially feasible at this point. And I take the one with the best evaluation, which is this guy with 83. So I expand that guy and I get two nodes: one with 83 and one with 35. So I take the node with the highest evaluation, which is once again 83. I expand that 83, and I get one infeasible configuration and one feasible solution with a value of 48. So I get 48 over there.
That 48 is the best, so that's my solution now, and I can throw away all the nodes in the tree whose evaluation is worse than that. So I will throw away this particular node here, the one with 35. Now, this node here with 80: its evaluation is still greater than the best solution I have found, so I keep expanding it. Whoops, sorry, I forgot one of the things I wanted to do. So I expand it and I get these two nodes there: one of them has the value 80, the other one the value 45. The value 45 I can throw away again, and the value 80 now becomes the best found solution. So, essentially, best-first is always going to look at the open nodes and take the one which has the best evaluation. When does it prune? It prunes when you have found a feasible solution: when you find a feasible solution, all the nodes whose evaluations are worse are thrown away. And the question I have for you is: is it memory efficient? I'm giving you a trick here. In most of these optimization problems, and for most of the questions I'm going to ask you, the answer can be found by exaggerating. Exaggerate the situation, and you will find out what the answer to the question is. So how do we exaggerate in this particular context? The way to exaggerate here is to assume that you have a terrible optimistic evaluation, one that always returns plus infinity, one that always believes it can win the lottery. If you do that, how many nodes are you going to explore? You are basically going to explore the entire tree, because it's only when you find a complete solution at the end that you learn its actual value; all the other nodes are still saying, I can do better, I can do better. So you have to explore the entire thing. What this is basically telling you is that if there are many nodes whose evaluations are very close to the optimal solution, you may explore, and have to keep in memory, a very large portion of the tree. So memory is an issue when you do best-first branch and bound: you have to make sure that you are not going to explore too many nodes.
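Here is a minimal sketch of best-first branch and bound in the same illustrative style, with a priority queue of open nodes ordered by their optimistic evaluation. It uses the naive bound, as in the walkthrough above, and finds the same optimum of 80, although it may register solutions in a slightly different order than the slides, since a leaf only becomes the incumbent here when it is popped off the queue:

```python
# A sketch of best-first branch and bound: keep a priority queue of open nodes ordered
# by their optimistic evaluation, always expand the most promising one, and discard
# open nodes that are dominated by the best feasible solution found so far.
import heapq

def naive_bound(items, i, room, value):
    # Same naive bound as in the depth-first sketch: ignore the capacity constraint.
    return value + sum(v for v, _ in items[i:])

def best_first_branch_and_bound(items, capacity, bound=naive_bound):
    best = 0
    root = (0, capacity, 0)                                # (next item, room, value so far)
    frontier = [(-bound(items, *root), root)]              # max-heap via negated estimates
    while frontier:
        neg_estimate, (i, room, value) = heapq.heappop(frontier)
        if -neg_estimate <= best:
            continue                                       # dominated by the incumbent: prune
        if i == len(items):
            best = value                                   # a feasible solution: new incumbent
            continue
        v, w = items[i]
        children = [(i + 1, room, value)]                  # don't take item i
        if w <= room:
            children.append((i + 1, room - w, value + v))  # take item i (only if it fits)
        for child in children:
            estimate = bound(items, *child)
            if estimate > best:                            # keep only nodes that can still win
                heapq.heappush(frontier, (-estimate, child))
    return best

print(best_first_branch_and_bound([(45, 5), (48, 8), (35, 3)], 10))   # -> 80
```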
So, we have now seen two techniques: dynamic programming and branch and bound. One of the things you have to think about is which one is better than the other. And one of the things you may want to ask yourself is whether that is even a good question, whether it is even a valid question; think about it. And then the other thing, and this is a valid question for sure: can you actually combine these two techniques? Can you have an algorithm which is, at the same time, exploiting some features of dynamic programming and some features of branch and bound? So you need to think about what is really the essence of dynamic programming, what is really the essence of branch and bound, and whether you can actually combine the two of them. Good. So this is essentially the end of the ramping-up phase of the class. We have seen a very, very simple problem, the knapsack, and two basic techniques, dynamic programming and branch and bound. In the rest of the class we are going to look at various techniques like constraint programming, local search, and mathematical programming, and of course try to look at more complex problems. See you next time.