Particle Swarm Optimization (PSO) is a technique modeled on group behavior such as bird flocking. PSO can be used to find an approximate solution to a numerical optimization problem in situations where classical calculus-based techniques don't work or aren't feasible. Training a neural network is an example of such an optimization problem: the goal is to find the set of values for the network's weights and biases that minimizes the error between computed outputs and known outputs on a collection of training data.

In the machine learning community, by far the most common technique used to train a neural network is called back-propagation. However, I generally prefer to use PSO.

Because PSO is conceptually quite different from most traditional optimization algorithms, the VSM article doesn't demonstrate how PSO can train a neural network. Instead, it shows how to use PSO to solve a dummy benchmark problem: finding the values of x0 and x1 that minimize the function:

z = x0 * exp( -(x0^2 + x1^2) )
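A minimal PSO sketch in Python that minimizes this benchmark function might look like the following. The function and parameter names here are illustrative, not taken from the demo program; the inertia and cognitive/social coefficients (w = 0.729, c1 = c2 = 1.49445) are common textbook values. The true minimum of the function is at x0 = -1/sqrt(2) ≈ -0.7071, x1 = 0, where z ≈ -0.4289.

```python
import math
import random

def error(pos):
    # The benchmark function: z = x0 * exp(-(x0^2 + x1^2)).
    x0, x1 = pos
    return x0 * math.exp(-(x0 * x0 + x1 * x1))

def pso_minimize(f, dim=2, num_particles=20, max_iter=200,
                 lo=-4.0, hi=4.0, w=0.729, c1=1.49445, c2=1.49445, seed=0):
    rng = random.Random(seed)
    # Random initial positions and small random initial velocities.
    positions = [[rng.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(num_particles)]
    velocities = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
                  for _ in range(num_particles)]
    # Each particle remembers its own best position; the swarm
    # remembers the best position found by any particle.
    best_pos = [p[:] for p in positions]
    best_val = [f(p) for p in positions]
    g = min(range(num_particles), key=lambda i: best_val[i])
    swarm_best_pos, swarm_best_val = best_pos[g][:], best_val[g]

    for _ in range(max_iter):
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + pull toward the particle's
                # own best + pull toward the swarm's best.
                velocities[i][d] = (w * velocities[i][d]
                    + c1 * r1 * (best_pos[i][d] - positions[i][d])
                    + c2 * r2 * (swarm_best_pos[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
            val = f(positions[i])
            if val < best_val[i]:
                best_val[i], best_pos[i] = val, positions[i][:]
                if val < swarm_best_val:
                    swarm_best_val = val
                    swarm_best_pos = positions[i][:]
    return swarm_best_pos, swarm_best_val

best, z = pso_minimize(error)
```

With these settings the swarm typically converges to a point very close to (-0.7071, 0.0); because PSO is stochastic, different seeds give slightly different results.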

The graph of this function and a screenshot of a demo program solving the minimization problem are shown in the images below. I intend to follow up the November article with one that shows exactly how to use PSO to train a neural network.