Discrete Optimization aims to make good decisions when we have many possibilities to choose from. Its applications are ubiquitous throughout our society, ranging from solving Sudoku puzzles to arranging seating at a wedding banquet. The same technology can schedule planes and their crews, coordinate the production of steel, and organize the transportation of iron ore from the mines to the ports. Good decisions about scarce or expensive resources such as staff and materials also allow corporations to improve their profits by millions of dollars. Similar problems underpin much of our daily lives: they determine daily delivery routes for packages, build school timetables, and deliver power to our homes. Despite their fundamental importance, these problems are a nightmare to solve using traditional undergraduate computer science methods.
This course is intended for students who have completed Advanced Modelling for Discrete Optimization. In this course, you will extend your understanding of how to solve challenging discrete optimization problems by learning more about the underlying solving technologies, and how a high-level model (written in MiniZinc) is transformed into a form that is executable by these solvers. By better understanding the actual solving technology, you will both improve your modelling capabilities and be able to choose the most appropriate solving technology to use.
Watch the course promotional video here: https://www.youtube.com/watch?v=-EiRsK-Rm08

Reviews


4.9 (15 ratings)

5 stars: 14 ratings
4 stars: 1 rating

From the lesson

Local Search

This module takes you into the exciting realm of local search methods, which allow for efficient exploration of otherwise large and complex search spaces. You will learn the notions of states, moves and neighbourhoods, and how they are utilized in basic greedy search and steepest descent search in constrained search spaces. You will learn various methods of escaping from and avoiding local minima, including restarts, simulated annealing, tabu lists and discrete Lagrange multipliers. Last but not least, you will see how Large Neighbourhood Search treats finding the best neighbour in a large neighbourhood as a discrete optimization problem in itself, which allows us to explore farther and search more efficiently.

Taught by

Prof. Jimmy Ho Man Lee

Professor

Prof. Peter James Stuckey

Professor

Transcript

After making it into the Princess' intestines, Zhuge Liang and Wu Kong needed to find the deepest point within to target, and cause her enough pain to get her to give up the fan. Zhuge Liang summoned the Dragon of Dimensions. All right. So, Wu Kong and Zhuge Liang have made their way into the intestine. And now, what they're trying to do is find the most painful spot to strike, in order to give Princess Iron Fan massive indigestion. The most painful spot is at this global minimum in the intestine. All right, so, once Wu Kong gets there, he's going to strike, and Princess Iron Fan will be very distressed. All right. Again, she has magical defenses, so there are some places which will set off alarms and eject them straight away from the body. So they have to avoid those. So here is our pain point problem. We're inside the intestine, and we're searching for the most painful point. We've got these untouchable lines, as usual: the absolute value of x is not equal to the absolute value of y. We're trying to find the global minimum in this non-convex landscape. And here is the landscape of the intestine, and the interesting thing that's different from our previous examples: it's still non-convex and has lots of local minima, but it also has plateaus. So if we look at the map from above, we can see the global minimum is over here, but there are also these plateaus here. So here is a place where all of the positions take the same value along that line. And that's going to make it harder to do our search. So how are we going to search in landscapes which have plateaus? Basically, we have to be careful, all right? We need to be able to move to a candidate which is no worse. So we need to change our definition of a move: if we find a new candidate which is at least as good as where we are, then we should be allowed to move there. Otherwise, we'd be stuck. We may need to move along this plateau to find a new place to get off, all right?
If we do that, that's sufficient to solve many problems, since there can be many solutions on the plateau, and that'll be enough. But as soon as we are allowed to move to a candidate of equal value, then we're going to have this possibility of cycling. If we move from d to e and they have the same value, then of course, we can move back from e to d and still have the same value again. So, as soon as we allow this moving along plateaus, nothing prevents us from revisiting the same solution repeatedly. And in particular, if we use steepest descent, it'll actually force us to do this. All right, so let's do a steepest descent search in this search space with a plateau. So, we're going to start here as usual in our steepest descent search. We'll look at all of our neighbors. We'll find the best downhill move. We'll move to there. Again, we'll get here. We'll look at all our neighbors. We'll find the best downhill move, those two equal ones. We'll move to there. Now here we are, we'll look at all of our neighbors. And the best neighbor is this one of equal value. So we move there, because we're allowed to move across plateaus. Now, when we're here, of course, we find two neighbors of equal value, the ones above and below, and we can choose to go back to where we were. Again, if we're here, we'll look at our local neighbors, and the best move is that one. Again here, the two local neighbors are equally good, up or down, so we could go up. Again here, there are two local neighbors of equal value, up or down, so we could go down. And you can see what happens: there are these four positions of equal value in the search space, and we can just cycle up and down them forever, just cycling around. Okay, so, there's a problem. We're going to need a way of escaping these plateaus. And the simplest way of doing this is a steepest descent search with a Tabu list.
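The cycling behaviour described above is easy to reproduce. Here is a minimal sketch in Python, on a made-up toy 1-D landscape (not the lecture's intestine example), of steepest descent that accepts equal-valued "sideways" moves, showing how it can shuttle between plateau positions forever and never reach the global minimum:

```python
# Toy 1-D landscape with a plateau (equal values at indices 2-4).
# The global minimum (value 1) is at index 5.
landscape = [5, 3, 2, 2, 2, 1, 4]

def best_neighbour(i):
    """Return the neighbour with the lowest value (ties broken leftwards)."""
    candidates = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
    return min(candidates, key=lambda j: landscape[j])

def sideways_descent(start, max_steps=10):
    """Steepest descent that also accepts equal-valued moves; logs the path."""
    path = [start]
    i = start
    for _ in range(max_steps):
        j = best_neighbour(i)
        if landscape[j] > landscape[i]:
            break          # every neighbour is strictly uphill: stuck
        i = j              # move to a neighbour which is no worse
        path.append(i)
    return path

path = sideways_descent(0)
# The search reaches the plateau, then bounces between indices 2 and 3
# forever; it never visits index 5, the global minimum.
```

The deterministic tie-break is what forces the cycle here, exactly as the lecture's steepest descent walkthrough shows with its four equal-valued positions.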
We'll just look at the simplest form of Tabu list for the moment, and that is, we're going to just keep track of the last k places we were in our local search. And we're not going to allow ourselves to move back to one of those previous places, the last k places we've been. And we're still doing steepest descent. So, we're going to look at all of our neighbors, and take the best one of them, even if it's uphill. So the idea is we're doing a steepest descent, but sometimes we'll go uphill, and we know, because we've got a Tabu list, that we won't just go back to where we came from. So even though we're doing an uphill move, it should lead to some more exploration. So this is a very simple kind of Tabu list, where you just keep the last k places you were. In general, you can keep track of some feature of the previous places you were, for example, which variable changed, and you avoid making a move which uses the same feature, unless, of course, that leads to a better global solution. So there are more general forms of Tabu; we've just used the very simplest form here. So, let's do the same search again, now with the Tabu list. We're going to use a Tabu list of size one, so we just keep track of the last position we were in, and don't allow ourselves to go back to that last position. So let's see what happens in our search. It starts off here. We look at our neighbors, we find the best one, which is here, and we move down. Okay, and we also update the best so far. Now, this previous position where we were becomes Tabu; we can't go back there. So we look at all of our neighbors except things in the Tabu list, that red one, and we'll find a better position. So it's here, and we'll move there. And we'll update the objective, because we've actually found a better solution. Now, the Tabu list is this guy, the last one we've moved to. We look at all our neighbors except the Tabu one.
We're going to find the best one, which is this one here. So we're going to move there, and we get the same objective, so the best so far hasn't been updated. So now, we're in our plateau, and notice, we've now said that this move is Tabu, so we can't go back. So we look at all of our neighbors, and the only place we can go is straight up. So we go there, and now again, we look at all of the neighbors here, and this is the best neighbor, going up, because going down is Tabu, so we get up there. And now, an interesting thing happens. The best neighbor would be this one, but it's Tabu, so we're not going to consider it. So we have to do something else. We're going to move here in this example; that's one of the two equally best ways we could move. And now, again, this is Tabu. We can't go back to it, we have to try something else. So we move here. We have to try something else here. We move there, all right? So we can't move back there, but now, we are making moves which go downhill, and we get to this point, which is in fact the global minimum. All right. Notice also that we kept track of the best solution we've found so far. So basically, we know we're going to explore further, right? And if we don't run into resource exhaustion, we're going to keep trying. But we're going to keep track of the best solution we've found so far, and make use of that to finally answer the question. In practice, in Tabu search, we will find a good solution and we'll return to it, because it'll drop out of the Tabu list, and eventually we'll come back to it. But the good thing is, Tabu search can escape or avoid these local minima. So there are lots of questions in how to use it. For example, how long a Tabu list should you have? If it's very long, then it's expensive to store, every time you have to check a move against all of those possibilities, and it can prevent improvement, because you can't move easily enough.
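The walkthrough above can be sketched in a few lines of Python. This is a minimal illustration on the same kind of made-up toy 1-D landscape (the names and numbers are assumptions for this sketch, not from the course): steepest descent with a fixed-length Tabu list of recent positions, which takes the best non-Tabu neighbour even when it is uphill, and tracks the best solution seen so far:

```python
from collections import deque

# Toy landscape: plateau at indices 2-4, global minimum (value 1) at index 5.
landscape = [5, 3, 2, 2, 2, 1, 4]

def tabu_descent(start, tabu_len=1, max_steps=20):
    """Steepest descent with a Tabu list of the last `tabu_len` positions.
    Always moves to the best non-Tabu neighbour, even uphill, and keeps
    the best solution found so far."""
    tabu = deque(maxlen=tabu_len)   # the last k positions we were in
    i = best_i = start
    for _ in range(max_steps):
        candidates = [j for j in (i - 1, i + 1)
                      if 0 <= j < len(landscape) and j not in tabu]
        if not candidates:
            break                   # every neighbour is Tabu: stop
        tabu.append(i)              # where we are now becomes Tabu
        i = min(candidates, key=lambda j: landscape[j])
        if landscape[i] < landscape[best_i]:
            best_i = i              # update the best so far
    return best_i

best = tabu_descent(0)  # crosses the plateau and finds index 5
```

Even with a Tabu list of size one, the sideways moves can no longer cycle: the search is pushed across the plateau and reaches the global minimum, which is exactly the escape behaviour the walkthrough demonstrates.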
And if it's too short, then you can't escape deep local minima, because there's always another way back down into the local minimum. So you need a large enough Tabu list to prevent you from getting back to where you were. And then, what is stored in the Tabu list? The simplest thing is just valuations, but you can also store features. Remember, a valuation is big, because it's the entire solution. So features could be better, but then the question is which ones, and if you're not careful, if you have the wrong set of features, that can also prevent improvement. So, Tabu search is itself a well-studied area, and there are lots of things to learn about the individual ways of doing Tabu search in the best possible way. So, we now come to look at three different methods for escaping local minima, and this is critical for any local search method. A local search method will always find itself in a local minimum, so we need some way of escaping it. And we've seen three, but notice that you can also use them in combination; all of these tricks can be used in the same local search if you want. Restart search is very simple: just restart, jump to somewhere else, and try again. Simulated annealing basically occasionally goes back up the hill to find worse solutions. And then Tabu search, which is in general very useful, is to just not revisit the last k places you've been. All of these techniques are used for improving local search, allowing it to escape local minima, and therefore find a better solution than the first local minimum it comes across.
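Of the three escape methods, simulated annealing's move rule is the easiest to state precisely. A common formulation (the Metropolis acceptance criterion; this sketch is an illustration, not code from the course) accepts any improving or sideways move, and accepts a worsening move with probability exp(-delta/T), where delta is the increase in the objective and T is the current temperature:

```python
import math
import random

def accept_move(delta, temperature, rng=random.random):
    """Metropolis acceptance rule used in simulated annealing:
    always accept improving or sideways moves (delta <= 0), and accept a
    worsening move with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

accept_move(-2.0, 1.0)                   # improving move: always accepted
accept_move(5.0, 0.01, rng=lambda: 0.5)  # near-zero probability when cold: rejected
```

As the temperature falls over the run, uphill moves become ever less likely to be accepted, so the search starts out exploring widely and gradually settles into pure descent.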