There are many versions of PSO, such as the hybrid ones where PSO is used along with other algorithms (such as Simulated Annealing). But in general, a pure PSO algorithm is either single- or multi-objective and operates on either a discrete or a continuous space. The objectives of the algorithm are the things PSO tries to find a solution for. For example, PSO might concentrate on reducing the power consumption of a device without taking anything else into consideration, like the speed of the device. That’s why a multi-objective version was developed, to try to balance the solution. Balancing the work of the algorithm to consider more than one objective is part of the game theory field (if you are curious and want to know more, look at Nash Equilibrium and Pareto Optimality).

The original algorithm works on a continuous space, which means it solves problems such as the numeric minimization problem I mentioned in my previous post. PSO also works on a discrete binary space, which means the algorithm is used to find $ 0 $ or $ 1 $ values for the given problem (a simple example can be found in my paper). But now, let’s go back to our simple minimization problem and try to solve it using PSO. At this point, if you haven’t read my other blog post, please do so to know what I am trying to solve here.

PSO starts by creating a swarm of particles where each particle is a possible solution to the problem. Therefore, we need to understand what exactly we are trying to solve and how to map it to the objective function of PSO, which is considered the hardest part when designing the algorithm.

Let’s first define a few global variables needed throughout our program:

```python
import random

# the x and y in our function (x - y + 7) (aka. dimensions)
number_of_variables = 2
# the minimum possible value x or y can take
min_value = -100
# the maximum possible value x or y can take
max_value = 100
# the number of particles in the swarm
number_of_particles = 10
# number of times the algorithm moves each particle in the problem space
number_of_iterations = 2000
w = 0.729   # inertia weight
c1 = 1.49   # cognitive (particle)
c2 = 1.49   # social (swarm)
```

```python
class Particle:
    def __init__(self, number_of_variables, min_value, max_value):
        # init x and y values
        self.positions = [0.0 for v in range(number_of_variables)]
        # init velocities of x and y
        self.velocities = [0.0 for v in range(number_of_variables)]
        for v in range(number_of_variables):
            # update x and y positions
            self.positions[v] = ((max_value - min_value) * random.random() + min_value)
            # update x and y velocities
            self.velocities[v] = ((max_value - min_value) * random.random() + min_value)
        # current fitness after updating the x and y values
        self.fitness = Fitness(self.positions)
        # the current particle positions as the best fitness found yet
        self.best_particle_positions = list(self.positions)
        # the current particle fitness as the best fitness found yet
        self.best_particle_fitness = self.fitness
```

Before I explain what each line means I should first explain how a particle behaves in the problem space then get back to our code snippet above.

As I mentioned before, each particle in the swarm represents a possible solution to the problem. And as I also mentioned in the previous blog post, each particle tries to improve its solution by learning from two sources: its own movement in the problem space and the movement of the other particles of the swarm (by learning from the best solution found by any of the other particles).

Now, let’s see how that translates into code. The positions list in the code snippet above represents the current values of the variables in the objective function ($ x $ and $ y $), while the velocities list represents the (artificial) velocity of the particle along each of those dimensions. We first initialize the values to zeros, then update them using random numbers as follows:

```python
((max_value - min_value) * random.random() + min_value)
```

Each particle in our swarm keeps track of its own fitness value as well as the best positions and fitness found by any particle of the swarm (including itself). A particle’s fitness is the solution it achieved by plugging the current positions list values into the objective function (in our example problem, $ positions[0] = x $ and $ positions[1] = y $). Notice that during initialization we consider the particle’s fitness and positions as the best ones found yet, because they might well be; later, we check that and update it with the correct information during each iteration of the algorithm.

After the initialization of the swarm, we check all particles, find the best solution found so far, and keep track of it using the two variables best_swarm_positions and best_swarm_fitness.
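In a standalone form, that initial scan could look like the following sketch. It reuses the Particle class and Fitness function from this post (compressed here so the snippet runs on its own) and assumes the swarm is stored as a plain Python list:

```python
import random

def Fitness(positions):
    # x - y + 7
    return positions[0] - positions[1] + 7

class Particle:
    def __init__(self, number_of_variables, min_value, max_value):
        # random initial positions and velocities within [min_value, max_value)
        self.positions = [(max_value - min_value) * random.random() + min_value
                          for _ in range(number_of_variables)]
        self.velocities = [(max_value - min_value) * random.random() + min_value
                           for _ in range(number_of_variables)]
        self.fitness = Fitness(self.positions)
        self.best_particle_positions = list(self.positions)
        self.best_particle_fitness = self.fitness

# create the swarm: 10 particles over 2 variables in [-100, 100)
swarm = [Particle(2, -100, 100) for _ in range(10)]

# scan the swarm once and remember the best (lowest) fitness found so far
best_swarm_positions = list(swarm[0].positions)
best_swarm_fitness = swarm[0].fitness
for particle in swarm:
    if particle.fitness < best_swarm_fitness:
        best_swarm_fitness = particle.fitness
        best_swarm_positions = list(particle.positions)
```

Since we are minimizing, “best” here simply means the lowest fitness value in the swarm.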

Now, we are ready to start moving particles in the problem space by generating new velocities to find new positions ($x$ and $y$ values) and eventually find new solutions (fitness values). The fitness value of each particle in the swarm is going to be updated during each iteration of the algorithm.
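Before diving into those pieces, it helps to see the overall shape of the nested loops. This is only a structural sketch; the real loop body performs the velocity, position, fitness, and best-solution updates discussed below:

```python
number_of_particles = 10
number_of_iterations = 2000

# sketch of the nested loops: every iteration moves every particle once
moves = 0
for iteration in range(number_of_iterations):
    for particle_index in range(number_of_particles):
        # per particle: new velocities -> new positions -> new fitness
        # -> update the particle's and the swarm's best solutions
        moves += 1
```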

Then, inside the nested loops, we start updating the velocities and positions and calculate the new fitness while keeping track of the best fitness of the swarm. But first, let’s start by updating the velocities as follows:

```python
# compute new velocities for each particle
for v in range(number_of_variables):
    particle.velocities[v] = calculate_new_velocity_value(particle, v)
```

For each variable in the objective function, we calculate a new velocity that later aids in calculating a new set of positions. That is done by calling the calculate_new_velocity_value() function, passing it the current particle and the variable index, as follows:

```python
# calculate a new velocity for one variable
def calculate_new_velocity_value(p, v):
    # generate random numbers
    r1 = random.random()
    r2 = random.random()
    # the learning rate part
    part_1 = (w * p.velocities[v])
    # the cognitive part - learning from itself
    part_2 = (c1 * r1 * (p.best_particle_positions[v] - p.positions[v]))
    # the social part - learning from others
    part_3 = (c2 * r2 * (best_swarm_positions[v] - p.positions[v]))
    new_velocity = part_1 + part_2 + part_3
    return new_velocity
```

As shown in the above code snippet, the value of the new velocity is the sum of all of the following three parts:

The learning rate part:

The multiplication result of the inertia weight parameter ($w$) and the particle’s current velocity.
The inertia weight parameter influences the convergence of the algorithm and the exploration behavior of its particles. Therefore, a well-chosen inertia weight is very important to the quality of the solution found by PSO. A higher inertia weight means bigger steps in the problem space (in other words, higher velocities). There are many types of inertia weights, but in this example we use a fixed inertia weight (a static value) that does not change throughout the iterations. To learn more about other kinds of inertia weights, read section 2.3.4 in my master thesis. Also, check how I used simulated annealing and how it helped PSO achieve better results in my paper (Cloudlet Scheduling with Particle Swarm Optimization).

The cognitive part:

This part of the equation is the multiplication of the constant c1, the random number r1, and the difference between the position value corresponding to the best fitness found by the particle and the current position value.
The overall idea of this part of the equation is to represent the cognitive (self-learning) part of the particle.

The social part:

The multiplication of the constant c2, the random number r2, and the difference between the position value corresponding to the best fitness found by any particle of the swarm and the current position value of the particle.
The overall idea of this part of the equation is to represent the social ability of the particle (learning from the swarm).
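Put together, the three parts above form the velocity update for one variable $ v $:

$$ velocities[v] = w \cdot velocities[v] + c_1 r_1 \, (pbest[v] - positions[v]) + c_2 r_2 \, (gbest[v] - positions[v]) $$

where $ pbest $ stands for best_particle_positions and $ gbest $ for best_swarm_positions, exactly as in the calculate_new_velocity_value() function.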

To keep our velocities within our desired range we use the following few lines of code:
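Something like this sketch, which assumes we reuse min_value and max_value from earlier as the velocity bounds, and then moves the particle by adding each clamped velocity to its position (the stand-in particle values here are only for illustration):

```python
from types import SimpleNamespace

number_of_variables = 2
min_value = -100
max_value = 100

# hypothetical particle mid-iteration, with freshly computed velocities
particle = SimpleNamespace(positions=[50.0, -20.0], velocities=[250.0, -3.0])

# clamp each velocity to [min_value, max_value] so a particle
# cannot overshoot the search range in a single step
for v in range(number_of_variables):
    if particle.velocities[v] < min_value:
        particle.velocities[v] = min_value
    elif particle.velocities[v] > max_value:
        particle.velocities[v] = max_value

# move the particle: new position = old position + clamped velocity
for v in range(number_of_variables):
    particle.positions[v] += particle.velocities[v]
```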

Finally, we are ready to calculate the new fitness value using the objective function. As shown in the following code snippet, we plug the two positions values, which correspond to the $ x $ and $ y $ values, into the original equation $ x-y+7 $.

```python
# compute the fitness of the new positions
particle.fitness = Fitness(particle.positions)

def Fitness(positions):
    # x - y + 7
    return positions[0] - positions[1] + 7
```

At the end of each iteration, we evaluate the quality of the newly calculated fitness value and, if it is of high quality, use it to make two kinds of updates. The first is to update the best fitness found by the particle we are moving, and the second is to update the best fitness found by any particle of the swarm. Remember that the whole point of using PSO here is to find the values of $ x $ and $ y $ that minimize the value of the whole function. Therefore, the best solution to the problem is $ x = -100 $ and $ y = 100 $, which gives $ -100 - 100 + 7 = -193 $, and PSO should find this solution by the end of the iterations.
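Those two updates can be sketched as a small helper; since we are minimizing, “high quality” simply means a lower fitness value. The function name update_bests and the SimpleNamespace stand-in particle are illustrative, not part of the original code:

```python
from types import SimpleNamespace

def update_bests(particle, best_swarm_positions, best_swarm_fitness):
    # first update: the best solution found by this particle so far
    if particle.fitness < particle.best_particle_fitness:
        particle.best_particle_fitness = particle.fitness
        particle.best_particle_positions = list(particle.positions)
    # second update: the best solution found by any particle of the swarm
    if particle.fitness < best_swarm_fitness:
        best_swarm_fitness = particle.fitness
        best_swarm_positions = list(particle.positions)
    return best_swarm_positions, best_swarm_fitness

# tiny stand-in particle to show the call: positions [1, 13] give 1 - 13 + 7 = -5
p = SimpleNamespace(fitness=-5.0, best_particle_fitness=0.0,
                    positions=[1.0, 13.0], best_particle_positions=[0.0, 0.0])
best_swarm_positions, best_swarm_fitness = update_bests(p, [0.0, 0.0], 0.0)
```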

You can also download the full code and play with it yourself. And with that, I finish this post. I hope you learned something, and if you have any questions, don’t hesitate to comment below and I will get back to you ASAP. Happy learning!