Product units in the hidden layer of multilayer neural networks provide a powerful mechanism for neural networks to learn higher-order combinations of inputs efficiently. Training product unit neural networks with local optimization algorithms is difficult due to an increased number of local minima and an increased chance of network paralysis. This research investigates the problems of using local optimization, especially gradient descent, to train product unit neural networks, and shows that particle swarm optimization, genetic algorithms and leapfrog are efficient alternatives that successfully train product unit neural networks. Architecture selection, i.e. pruning, of product unit neural networks is also studied and a pruning algorithm is developed.
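To make the notion of a product unit concrete: whereas a standard summation unit computes a weighted sum of its inputs, a product unit computes the product of its inputs raised to learned exponents, which lets a single unit represent higher-order input combinations. The sketch below is illustrative only (the function name and use of NumPy are assumptions, not part of this work); it shows the forward computation of one product unit.

```python
import numpy as np

def product_unit(x, w):
    # Forward pass of a single product unit: prod_i x_i ** w_i.
    # With real-valued exponents w_i, one unit can express
    # higher-order terms such as x1**2 * x2 directly.
    return np.prod(np.power(x, w))

# Example: inputs (2, 3) with exponents (2, 1) give 2**2 * 3**1 = 12.
y = product_unit(np.array([2.0, 3.0]), np.array([2.0, 1.0]))
```

Because the output is exponential in the weights, small weight changes can produce large output swings, which is one intuition for the rugged error surfaces and paralysis that make gradient descent struggle on these networks.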