Introduction

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking. The algorithm is very simple but powerful. Here, I'm going to show how PSO can be used to minimize functions. Thus, PSO can be used as a training method for artificial neural networks or to minimize/maximize other high-dimensional functions. In the example shown, a function R² -> R is minimized. Videos of the exploration by "virtual birds" can be watched on YouTube.

Background

To understand the algorithm, it is best to imagine a swarm of birds searching for food in a defined area - there is only one piece of food in this area. Initially, the birds don't know where the food is, but at each time step they know how far away it is. Which strategy will the birds follow? Well, each bird will follow the one that is nearest to the food.

PSO adapts this behaviour and searches for the best solution vector in the search space. A single solution is called a particle. Each particle has a fitness/cost value that is evaluated by the function to be minimized, and each particle has a velocity that directs its "flight". The particles fly through the search space by following the current optimum particles.

The algorithm is initialized with particles at random positions, and then it explores the search space to find better solutions. In every iteration, each particle adjusts its velocity toward two best solutions. The first is the cognitive part: the particle follows its own best solution found so far, i.e., the solution that produced the lowest cost (has the highest fitness). This value is called pBest (particle best). The other best value is the current best solution of the swarm, i.e., the best solution found by any particle in the swarm. This value is called gBest (global best). Each particle then adjusts its velocity and position with the following equations:

v' = v + c1·r1·(pBest - x) + c2·r2·(gBest - x)
x' = x + v'

v is the current velocity, v' the new velocity, x the current position, x' the new position, pBest and gBest as stated above, r1 and r2 are uniformly distributed random numbers in the interval [0, 1], and c1 and c2 are acceleration coefficients. c1 is the factor that influences the cognitive behaviour, i.e., how much the particle will follow its own best solution, and c2 is the factor for social behaviour, i.e., how much the particle will follow the swarm's best solution.
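As a hedged illustration (the article's actual implementation is C# and attached, not shown here), a single velocity/position update for one particle can be sketched in Python; the names `c1`, `c2`, `p_best`, and `g_best` follow the article's notation, and the default coefficient values are illustrative assumptions:

```python
import random

def update_particle(x, v, p_best, g_best, c1=2.0, c2=2.0):
    """Apply the PSO velocity and position update to one particle.

    x, v, p_best, g_best are equal-length lists of floats.
    Returns the new position and the new velocity.
    """
    new_v = []
    for xi, vi, pi, gi in zip(x, v, p_best, g_best):
        # r1, r2 are fresh uniform random numbers in [0, 1] per dimension
        r1, r2 = random.random(), random.random()
        new_v.append(vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi))
    # x' = x + v'
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Note that when a particle sits exactly on both pBest and gBest, the attraction terms vanish and it simply coasts along its current velocity.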

The algorithm can be written as follows:

1. Initialize each particle with a random velocity and random position.

2. Calculate the cost for each particle. If the current cost is lower than the best value found so far, remember this position (pBest).

3. Choose the particle with the lowest cost of all particles. The position of this particle is gBest.

4. Calculate, for each particle, the new velocity and position according to the above equations.

5. Repeat steps 2-4 until the maximum number of iterations is reached or the minimum error criterion is attained.
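The steps above can be sketched as a minimal, self-contained Python loop (a sketch for illustration, not the article's C# code). The sphere function f(x, y) = x² + y² stands in for the R² -> R example, and the inertia weight `w` is a commonly used addition that the equations above omit, included here as an assumption to keep velocities from growing without bound:

```python
import random

def pso_minimize(cost, dim=2, n_particles=50, iterations=200,
                 bounds=(-10.0, 10.0), w=0.73, c1=1.5, c2=1.5):
    """Minimize `cost` (a function of a list of floats) with a basic PSO loop."""
    lo, hi = bounds
    # Step 1: initialize each particle with a random position and velocity.
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[random.uniform(lo - hi, hi - lo) for _ in range(dim)] for _ in range(n_particles)]
    p_best = [list(x) for x in xs]
    p_cost = [cost(x) for x in xs]
    # Step 3: gBest is the position of the particle with the lowest cost.
    g_idx = min(range(n_particles), key=lambda i: p_cost[i])
    g_best, g_cost = list(p_best[g_idx]), p_cost[g_idx]

    for _ in range(iterations):  # Step 5: repeat until the iteration budget runs out
        for i in range(n_particles):
            # Step 4: update velocity and position per the equations above.
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (p_best[i][d] - xs[i][d])
                            + c2 * r2 * (g_best[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            # Step 2: remember the best position this particle has seen (pBest) ...
            c = cost(xs[i])
            if c < p_cost[i]:
                p_best[i], p_cost[i] = list(xs[i]), c
                # ... and the best position of the whole swarm (gBest).
                if c < g_cost:
                    g_best, g_cost = list(xs[i]), c
    return g_best, g_cost

# Example: minimize the sphere function f(x, y) = x^2 + y^2 (minimum 0 at the origin).
random.seed(42)  # fixed seed so the run is reproducible
best, best_cost = pso_minimize(lambda p: sum(t * t for t in p))
```

With 50 particles, a swarm like this homes in on the origin quickly for such a smooth, unimodal function.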

It's quite a simple algorithm, but very powerful. This is the kind of algorithm that fascinates me.

Implementation

Abstract implementation

The code does nothing more than what was stated in the algorithm above. First, I'll show the abstract implementation of the particle swarm and the particles. For brevity, the properties are omitted; they can be seen in the above class diagram (or in the attached source). The concrete implementation for minimizing the function is shown afterwards.

The full example for minimizing the function R² -> R is attached. It's basically the same code that was used to create the plot view video.

Points of interest

The particle swarm optimization algorithm is quite similar to genetic algorithms and can be used for similar problems. I found that PSO works quite well for training feedforward neural networks. The algorithm allows for parallelization, which can boost execution speed. A swarm size of 50 is sufficient in most cases, so the algorithm uses fewer resources than a genetic algorithm (where the population can be 1000+).
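To illustrate the parallelization point (a sketch, not the article's .NET implementation): the cost evaluations in step 2 are independent per particle, so they can run concurrently. This hypothetical helper uses Python's `concurrent.futures`; being thread-based, in CPython it pays off only when the cost function releases the GIL (native code, I/O):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_swarm(cost, positions, max_workers=4):
    """Evaluate the cost of every particle position concurrently.

    The per-particle evaluations are independent, so they map cleanly
    onto a worker pool; `Executor.map` preserves the input order.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(cost, positions))
```

The same independence is what makes PSO a good fit for the parallelization features mentioned above.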

Have a look at my blog for some additional information on other metaheuristic algorithms.

Comments and Discussions

I had problems building the solution in VS2013 too, so I deleted the references "gfoidl.Parallelization" and "gfoidl.ComputationalIntelligence", and renamed the namespace "ParticleSwarm.FunctionMinimizing.Properties" in Resources.Designer.cs and Settings.Designer.cs to something that does not include the string "ParticleSwarm", because that string was already used as a type name. I also changed the code in line 242 from "i, j," to "j, i," because while building the solution there was an exception saying "An unhandled exception of type 'System.ArgumentOutOfRangeException' occurred in System.Drawing.dll". Examining the code, I found that the variable j corresponds to the width of the bitmap and i to its height, so when they are used in the method SetPixel(), the correct call is "bmp.SetPixel(j, i, ...)" rather than the "bmp.SetPixel(i, j, ...)" used in the code.

After that, everything was OK and the solution built successfully. I hope my experience will help someone in trouble.

I wrote basically a different set of approaches: http://www.codeproject.com/Articles/335565/Intro-Numerical-Solutions which does some different cool things trying to solve the same sort of function. I also did hierarchical grid searches.

In theory, using swarms, the "birds" would lock onto and cluster about local minima.

In the "bird" sense, suppose you had a few pieces of food (near-perfect solutions) but the birds can barely see the food (no steep descent to the solution). The super cool thing about the approach is that you'd slowly concentrate onto likely minima locations, and then one or more birds would lock onto very good solutions.

In that sense, unlike grid searches, you'd lock on faster, deal with local minima, and find the ideal solution much faster than I did!

After reading some papers about this topic, I'm sure that the code part above is a bug in this implementation, because the convergence depends on the optimum (more precisely: on the positions of the particles, which are hopefully near the optimum).

The optimization model works fine with simple functions; however, the results are very random with more complex functions. In the list of sample functions given in the source code, BumpsFunction is one of the functions with random minimized values. Try to run it and you get completely different results each time. I expect the situation to be even worse for functions with more than two variables.

The code compiles and is correct (at least at the time I wrote the article). Are you sure that you didn't change anything?

BTW: These errors belong to the fundamental knowledge of a (C#) programmer, and therefore this should not be discussed in the comments section of this article. It's neither an introduction to building with VS, nor a forum. Thx.

I had the same issue. I renamed the two namespaces ParticleSwarmDemo.FunctionMinimizing and gfoidl.ComputationalIntelligence to something else and then back to their original names, and everything worked. Probably a bug in Visual Studio 2010 backward compatibility.

I haven't compared my code to the MATLAB solution, but I think and believe my code is faster. With the parallelization features of .NET 4.0 you can improve the speed even more, so it should definitely be faster.

My name is Jo, and I'm from Malaysia. I currently study at UNIMAS and I'm in my final year. I have to do a project in order to complete my degree. My project is about a PSO algorithm for antenna design. I have to develop a PSO program and show it to my project supervisor. The problem is that until now I couldn't understand and figure out how to write the program. Please help me and show step by step how to develop the program. What is the easier way to develop a PSO program: using MATLAB or other software? Thanks, guys.