NetLogo User Community Models


## WHAT IS IT?

An abstract implementation of a local search algorithm--first-choice gradient ascent (FCGA)--with several variations.

Gradient descent is a first-order optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
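The step rule described above can be sketched numerically. This is a minimal illustration, not part of the model: the function `f(x) = -(x - 3)^2`, the learning rate, and the starting point are all assumptions chosen so the single maximum at `x = 3` is easy to verify.

```python
# Minimal sketch of gradient ascent on f(x) = -(x - 3)^2,
# whose only (and therefore global) maximum is at x = 3.
# Function, learning rate, and starting point are illustrative choices.

def gradient_ascent(grad, x0, learning_rate=0.1, iterations=100):
    """Repeatedly step proportional to the POSITIVE gradient (ascent)."""
    x = x0
    for _ in range(iterations):
        x += learning_rate * grad(x)
    return x

# f(x) = -(x - 3)^2  =>  f'(x) = -2 * (x - 3)
peak = gradient_ascent(lambda x: -2 * (x - 3), x0=0.0)
```

Using the negative of the gradient in the update (`x -= learning_rate * grad(x)`) would turn the same loop into gradient descent, approaching a local minimum instead.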

Local search algorithms can be used to approximate solutions to NP-hard/complete problems.

I use two examples to show how to use my abstract gradient ascent implementations.

Using a form of stochastic local search called first-choice gradient ascent, the model searches the state space of Rook Jumping Mazes (RJMs) and a hill-climbing landscape for a given number of iterations, then prints/displays the best maze (highest peak) evaluated according to the specified objective function.

Given a number of iterations from the user, first generate a random problem and evaluate it according to the objective function. Then for the given number of iterations:

- Go to a neighboring solution state.
- Re-evaluate the objective function on this new solution.

- If the objective function has not decreased (worsened), accept the change; furthermore, remember this solution if its evaluation is the best (maximum) evaluated so far. Otherwise, reject the change and revert to the previous solution.

Finally, print/display the solution with the best (maximum) objective function evaluation found.
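The loop above can be sketched in Python. This is a hedged illustration of the accept/reject/remember-best logic, not the model's NetLogo code: the bit-string representation, the "one-max" objective (count of 1-bits), and the single-bit-flip neighbor move are all stand-in assumptions in place of the maze objective.

```python
import random

def first_choice_ascent(objective, initial, neighbor, iterations, rng):
    """First-choice hill climbing: try one random neighbor per iteration,
    accept it if the objective has not decreased, and track the best state."""
    current = list(initial)
    current_value = objective(current)
    best, best_value = list(current), current_value
    for _ in range(iterations):
        i = neighbor(current, rng)       # mutate one component in place
        new_value = objective(current)
        if new_value >= current_value:   # not worsened: accept the change
            current_value = new_value
            if new_value > best_value:   # remember the best seen so far
                best, best_value = list(current), new_value
        else:
            current[i] = 1 - current[i]  # worsened: revert the change
    return best, best_value

def flip_random_bit(state, rng):
    """Illustrative neighbor move: flip one random bit, return its index."""
    i = rng.randrange(len(state))
    state[i] = 1 - state[i]
    return i

# Toy run: maximize the number of 1-bits in a 20-bit string.
best, value = first_choice_ascent(sum, [0] * 20, flip_random_bit,
                                  iterations=500, rng=random.Random(42))
```

Because rejected moves are reverted, the current solution never worsens; the separate `best` bookkeeping matters for variants (e.g. restarts or plateau moves) where the current state can drift away from the best one seen.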