This project is divided into two main parts. In the first part, we built a neural network implementing the multilayer perceptron (MLP) algorithm. The basic implementation of this network builds upon an implementation by Professor Olivier Temam.

We adjusted this implementation to fit our needs. In this part, we first built a naive implementation of the network and then refined it to better utilize the capabilities of the parallel processor. To that end, we introduced several stages of parallelization in order to achieve a substantial speedup over the original serial baseline. Throughout the entire process, we tested and ran these stages on different network layouts and on different kinds of inputs.