Good morning again. This is Discrete Optimization: LDS and LNS. In this lecture you are going to see two things that are really useful in practice, and they are also genuinely beautiful ideas. One is a strategy for exploring a search tree; the other is a hybrid optimization technique. It is a short lecture, but the two ideas in it are ones you will be able to reuse all over the place.

Essentially, what I am going to talk about today is search. The first topic is limited discrepancy search, LDS, which gives you another way of searching a tree. We have seen depth-first search and best-first search; that is mostly what we have been talking about so far. This is a very different way of exploring the tree: the order in which you visit the leaves is going to be very different, and it is based on a very clever idea. The second topic is called large neighborhood search, LNS. It is a hybridization of constraint programming and local search, and it is very effective for solving very large-scale problems.

So let's first talk about limited discrepancy search. The key assumption is that you have a good heuristic for your problem, one that typically gives you a good solution. Maybe not the optimal, maybe not even close to the optimal, but typically a heuristic that makes very few mistakes. We also assume that the search tree is binary. That is not a real limitation, we could generalize, but many problems are naturally binary: in scheduling, for instance, the decision is whether a given task is scheduled first on a machine or not.

Then we assume that, inside the tree, following the heuristic means taking the left branch; that is typically how the tree is built. A mistake is when you branch right: the algorithm branches left by default, and if the heuristic made a mistake, you reconsider that choice and go right. So when you go left the heuristic is right, and when you go right the heuristic is wrong.

The key idea behind limited discrepancy search, and this is very useful for a variety of scheduling problems, is to use the heuristic to guide the exploration of the whole tree, not only to choose the first leaf to visit. You are not just following the heuristic once and then switching to ordinary search; you use the heuristic to decide which parts of the tree to explore first. You explore the tree in increasing order of mistakes. Essentially, you trust the heuristic less and less over time: initially you trust it completely, then you trust it a little less, and so on. What does that mean in practice?
It means you explore the tree in a sequence of waves. The first wave is no mistakes, the next wave is one mistake, the third wave is two mistakes, and so on. Once again, this is much easier to understand with a picture, so I am going to step out of the way so you can see the tree.

Initially you trust the heuristic completely, which means you assume it makes no mistakes: you always branch left, and you reach a single leaf. If the heuristic were always right, that could be the optimal solution. But obviously the heuristic makes some mistakes. So in the next wave, we assume the heuristic is really good except that it made exactly one mistake, and we do not know which one. That means that instead of going left all the time, you go left everywhere except once, where you go right, and those are the leaves you explore. You can make the mistake at the beginning and then go left the rest of the way; or go left, make the mistake afterwards, and then go left again, because you can only make one mistake; or the heuristic is right twice and you make the mistake at the end.

So what you see now are the leaves explored in wave two, and one of the big things to notice is that this is not at all the order of depth-first search. Depth-first search would have explored the leftmost part of the tree first. And this is a tiny tree; imagine a tree of depth twenty. You would explore some leaves over on the left, but you would also explore a leaf on the right part of the tree. So this is a very, very different order from what depth-first search would do.

The third wave, which is also the title of a book by Alvin Toffler, but you do not care about that, explores all the leaves the heuristic can reach by making exactly two mistakes. You can be right at the first decision and then make two mistakes; or make one mistake, go left, and then make another mistake; or make two mistakes and then have to be right. And then there is the last wave, three mistakes in this particular case; of course, when the tree is much larger, there are many more waves to explore.

That is the basic idea behind limited discrepancy search. Once again, there are variations around it.
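To make the waves concrete, here is a minimal sketch of limited discrepancy search in Python. It is not from the lecture: the tiny tuple-encoded tree, the function names, and the choice of visiting the leaves with exactly k right branches in wave k are all illustrative assumptions.

```python
# Minimal sketch of limited discrepancy search (LDS) on a binary tree.
# The tree is a nested tuple: a leaf is an int, an internal node is
# (left_child, right_child), where "left" is the heuristic's choice.

def lds_probe(node, discrepancies, leaves):
    """Collect the leaves reachable with exactly `discrepancies` right branches."""
    if not isinstance(node, tuple):          # leaf
        if discrepancies == 0:
            leaves.append(node)
        return
    left, right = node
    # Following the heuristic (going left) costs nothing.
    lds_probe(left, discrepancies, leaves)
    # Going right uses up one discrepancy, i.e. one "mistake".
    if discrepancies > 0:
        lds_probe(right, discrepancies - 1, leaves)

def lds(tree, max_depth):
    """Visit leaves wave by wave: 0 mistakes first, then 1, then 2, ..."""
    for wave in range(max_depth + 1):
        leaves = []
        lds_probe(tree, wave, leaves)
        print(f"wave {wave} ({wave} mistakes): {leaves}")

# A depth-3 tree whose leaves are numbered 0..7 from left to right.
tree = (((0, 1), (2, 3)), ((4, 5), (6, 7)))
lds(tree, 3)
# wave 0 visits leaf 0; wave 1 visits 1, 2, 4; wave 2 visits 3, 5, 6; wave 3 visits 7.
```

Running it on the depth-3 tree prints the four waves in exactly the order described above: the all-left leaf first, then the leaves one right branch away, and so on.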
One variation that is interesting in practice starts from the observation that we have less information at the root, so the mistakes are probably near the top, and you can design algorithms that take that into account; one of my colleagues did something like that. But the key point is always the same: you have a good heuristic and you are trying to follow it. So that is the first thing I wanted to talk about: how to explore a tree using a good heuristic.

The second thing I want to talk about is the clever idea of large neighborhood search. It is a combination of constraint programming and local search, and it is basically a four-step process. Step one: start from a feasible solution; we know that CP is very good at finding feasible solutions, so that is a CP step. Step two: select a neighborhood; that is typically a local search step, and I will say more about it in a moment. Step three: optimize over that neighborhood with CP, which means you select the best neighbor, or at least a very good neighbor, in that neighborhood using constraint programming. Now you have a better solution. Step four: repeat from step two, that is, select another neighborhood and re-optimize again with CP.

Now, what is a neighborhood? There are many ways to define one, but the basic intuition to remember is this: you fix a subset of the variables. That subset is frozen, you cannot change it; the other variables are freed, and you can find new values for them. So you fix part of the solution, let the rest of the solution be free, and re-optimize that part. Which subset you choose is problem-specific: you can exploit the structure of the problem and a lot of the information you have in the constraints at that point. Different problems will call for different neighborhoods, and that is normal; it is the same as in local search. In scheduling problems you might look at the machines to select which variables to free; in routing problems you might look at the vehicles, and so on.

So why do we do LNS? The big reason is that constraint programming is very good at finding feasible solutions and very good at searching small combinatorial spaces; what is difficult is scaling to a lot of variables. The local search step is what gives you scalability: you solve the global problem by solving a lot of smaller problems of exactly the same type. And this obviously generalizes to MIP directly: replace CP with a MIP solver and you get a large neighborhood algorithm in which the MIP solver is used to find a feasible solution and then reused to re-optimize each neighborhood. Once again, there are variations around this algorithm, and it is a very nice technique; some techniques in the MIP community resemble this idea as well.
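As a rough sketch of the four steps, here is a toy LNS loop in Python. The problem (pick exactly K of N items minimizing a quadratic cost), the random choice of which variables to free, and the brute-force re-optimization are all my own illustrative assumptions; in practice a CP or MIP solver would do the re-optimization.

```python
import itertools
import random

# Toy problem (illustrative only): choose exactly K of N items, x[i] in {0, 1},
# minimizing the quadratic cost x^T Q x.  Brute force over the freed variables
# stands in for the CP solver in the re-optimization step.

random.seed(0)
N, K = 30, 10
Q = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def cost(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(N) for j in range(N))

def reoptimize(x, free):
    """Step 3: fix every variable outside `free`, enumerate the rest, keep the best."""
    fixed_ones = sum(x[i] for i in range(N) if i not in free)
    best, best_cost = list(x), cost(x)
    for bits in itertools.product((0, 1), repeat=len(free)):
        if fixed_ones + sum(bits) != K:        # keep the cardinality constraint
            continue
        cand = list(x)
        for i, b in zip(free, bits):
            cand[i] = b
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best

x = [1] * K + [0] * (N - K)                    # Step 1: any feasible solution.
for _ in range(50):
    free = random.sample(range(N), 8)          # Step 2: select a neighborhood.
    x = reoptimize(x, free)                    # Step 3: re-optimize over it.
print("final cost:", round(cost(x), 3))        # Step 4 was the loop itself.
```

Each pass keeps most of the incumbent fixed and solves a small subproblem of the same type, which is exactly the scalability argument made above.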
But what I want to convey here, once again, is the intuition of how you can hybridize some of the techniques we have seen in this class. So let me give you an example that I really like for explaining this: the asymmetric traveling salesman problem with time windows. Think of it this way: you have a set of locations to visit, as in a TSP. You spend a certain amount of time at each location, the service time. Each location also has a time window in which it must be visited; you can only be there during that interval. And the distance between locations is asymmetric: going from A to B can be shorter than going from B to A, which complicates matters. What you are trying to do is find a Hamiltonian path, a path that visits every one of these locations exactly once, such that every time window is satisfied, while minimizing the total distance traveled along the path.

Here is a very simple constraint programming model for this, expressed as a scheduling problem. You define an activity for every location, and the duration of each activity is its service time. There is one vehicle, a resource that has to move from one activity to the next and perform every one of them. What you minimize is the total transition time between the activities, and you make sure the time window constraints are satisfied. Once again, I do not need you to understand this model in detail; the key point is its size, about fifteen lines of code. What I am going to show you is that once you run large neighborhood search on top of this model, it becomes a very effective technique for solving the problem.

This picture is an illustration of what the problem is about. Every one of these lines is a location. The blue block is the service time, and the white interval around it is the time window for that location. What you do not see is where the locations are in space, and therefore the travel needed to move from one to the next; the display shows the tasks in a temporal fashion, not their spatial arrangement.

So what will large neighborhood search do? It looks at this and says: I have a feasible solution, great, step one is done. Now I select a neighborhood. I take this box here: everything outside the box is fixed, I am not going to touch those tasks, they stay scheduled exactly where the feasible solution put them. What I want to do is re-optimize the part inside the box. So I assume everything outside the box is fixed, and I re-optimize inside the box.
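Here is a rough sketch of that "box" neighborhood on a plain asymmetric path problem, with the time windows left out to keep it short. Everything in it (the random distance matrix, the window size, the brute-force re-optimization) is an illustrative assumption; in the lecture the re-optimization inside the box is done by the CP model above.

```python
import itertools
import random

# "Box" neighborhood: fix the visiting order outside a window of consecutive
# positions and re-optimize the order of the visits inside the window.
# Time windows are omitted; a CP solver would handle them during re-optimization.

random.seed(1)
N = 20
# Asymmetric travel times: dist[a][b] need not equal dist[b][a].
dist = [[0 if a == b else random.randint(1, 100) for b in range(N)] for a in range(N)]

def length(path):
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def reoptimize_window(path, start, size):
    """Brute-force the best ordering of path[start:start+size], rest fixed."""
    best, best_len = list(path), length(path)
    for perm in itertools.permutations(path[start:start + size]):
        cand = path[:start] + list(perm) + path[start + size:]
        if length(cand) < best_len:
            best, best_len = cand, length(cand)
    return best

path = list(range(N))                          # step 1: some feasible visiting order
for _ in range(200):
    start = random.randrange(N - 6)            # step 2: pick a box of 6 consecutive visits
    path = reoptimize_window(path, start, 6)   # step 3: re-optimize inside the box
print("path length after LNS:", length(path))
```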
I keep re-optimizing that box until I find its optimal solution, or until I am tired of optimizing it, and then I take another box and do the same thing. Or you can do something a little more interesting: I can randomly choose the tasks that I am going to relax. Here the tasks in pink are the ones being relaxed; all the other ones are fixed. The relaxed tasks do not have to be consecutive in the tour, they can be anywhere, and I re-optimize that part as well.

So you have an algorithm with basically two neighborhoods: either you relax a consecutive sequence of tasks, or you relax a random set of tasks, and you fix the rest. You re-optimize and you keep doing this for a while.

Now let me show you the kind of results you get. These are old results, from six or seven years ago, but they are very interesting. What we did at that point was compare against a very sophisticated branch-and-cut algorithm for this particular problem, developed by the amazing team in Berlin led by Martin Grötschel. They had a very sophisticated algorithm for finding high-quality solutions and sometimes proving optimality, although that is very difficult; only on the very small, very tiny instances could optimality be proved. The bars show the best solutions they found in essentially five hours of CPU time, and you also see the sizes of the instances, which go from 40 to 233 locations in this particular case.

What we said was: let us take this very, very simple constraint programming model, hybridize it with local search to get this LNS algorithm, and see how fast we can reach those bars, because almost none of them had been proven optimal. And this is what you get in essentially five minutes: the LNS algorithm could match or improve most of the bars, and it needed ten minutes for the remaining ones. What I am showing you is the power of this method: a very simple algorithm, with this hybridization, gives a dramatic improvement over a very, very sophisticated algorithm, and it really was a very sophisticated algorithm. So this is the power of large neighborhood search. It has been applied to many, many problems and it has been generalized in many ways, but at its core it is a very nice hybridization technique for finding high-quality solutions very quickly. Once you have such a high-quality solution, you can turn to branch and cut, or to a pure constraint programming system, to prove optimality. But this is the kind of thing large neighborhood search lets you do: finding high-quality solutions very quickly.

So this is the end. Thank you very much, and I hope you use these two techniques; they are very useful in practice.