Degree Name

Department

Advisor(s)

Keywords

Subject Categories

Computer Engineering | Data Storage Systems

Abstract

A large number of problems can be cast as optimization problems in which the objective is to find a set of values for the problem parameters that maximizes or minimizes an objective (fitness or cost) function. This work proposes the Multi-Phase Particle Swarm Optimization (MPPSO) algorithm, which extends the recently proposed Particle Swarm Optimization (PSO) algorithm so that it can be applied to both continuous and discrete optimization problems. The PSO algorithm belongs to the Evolutionary Computation paradigm. Unlike most neural network and gradient descent algorithms, it requires no gradient information. It evolves a population of individuals called "particles." Each particle moves around the search space, updating its velocity and position based on the best positions discovered thus far by itself and by other particles. The MPPSO algorithm extends PSO by using multiple groups of particles with different goals that change over time, alternately moving toward or away from the best candidate solutions discovered thus far. It performs better, and is less likely to be trapped in a local optimum, than the PSO strategy in which all particles move toward the best solution discovered thus far. The MPPSO algorithm also enforces a steady improvement in solution quality by accepting only moves that improve fitness, eliminating a considerable amount of search effort that would otherwise be wasted in relatively poor particle positions. The possibility of becoming stuck in a local optimum is further reduced by periodically reinitializing particle velocities, which proves more useful than randomly reinitializing particle positions. Experimental simulations show that the proposed algorithm outperforms a genetic algorithm, an evolutionary programming algorithm, and a previous version of the Particle Swarm Optimization algorithm on several difficult benchmark problems in both discrete and continuous spaces. It reached optimum fitness using fewer fitness evaluations and less computation time than the other algorithms. A further set of simulations showed that the algorithm successfully trains a two-layer feedforward neural network, reaching lower error values than the Backpropagation algorithm.
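The particle update described in the abstract can be sketched as follows. This is a minimal illustration, not the author's implementation: the two-group split, phase length, velocity coefficients, and velocity bounds below are all simplifying assumptions. It does, however, show the three ingredients the abstract names: groups that alternately move toward or away from the best solution found so far, hill-climbing acceptance of only improving moves, and periodic velocity reinitialization.

```python
import random

def mppso(fitness, dim, n_particles=20, iters=200,
          phase_length=25, vmax=1.0, seed=0):
    """Minimal multi-phase PSO sketch that minimizes `fitness`.

    Particles are split into two groups whose attraction to the
    global best flips sign every `phase_length` iterations; a move
    is accepted only if it improves the particle's fitness, and
    velocities are periodically reinitialized. All coefficient
    choices are illustrative assumptions, not the original ones.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[rng.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n_particles)]
    fit = [fitness(p) for p in pos]
    best_i = min(range(n_particles), key=lambda i: fit[i])
    gpos, gfit = pos[best_i][:], fit[best_i]       # global best so far

    for t in range(iters):
        phase = (t // phase_length) % 2            # phases alternate over time
        for i in range(n_particles):
            group = i % 2                          # two groups with different goals
            # In the current phase one group moves toward the global
            # best while the other moves away from it (sign flip).
            cg = 1.0 if group == phase else -1.0
            cand = pos[i][:]
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + cg * rng.random() * (gpos[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                cand[d] = pos[i][d] + vel[i][d]
            cf = fitness(cand)
            if cf < fit[i]:                        # hill climbing: keep only improving moves
                pos[i], fit[i] = cand, cf
                if cf < gfit:
                    gpos, gfit = cand[:], cf
        if t % 50 == 49:                           # periodic velocity reinitialization
            vel = [[rng.uniform(-vmax, vmax) for _ in range(dim)]
                   for _ in range(n_particles)]
    return gpos, gfit

# Example: minimize the sphere benchmark sum(x_i^2) in 3 dimensions.
best, best_fit = mppso(lambda x: sum(v * v for v in x), dim=3)
```

Because only improving moves are accepted, the best fitness found is monotonically non-increasing over iterations, which is the "steady improvement in solution quality" property the abstract describes.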

Access

SURFACE provides description only. Full text is available to ProQuest subscribers. Ask your librarian for assistance.