Machine learning – Neural network function approximation tutorial

In this tutorial, we will approximate the function describing a smart sensor system. A smart sensor consists of one or more standard sensors coupled with a neural network that calibrates the measurements of a single parameter.

We will use the voltages v1 and v2 coming from two solar cells to estimate the position y of an object along one dimension.

In this tutorial, we will train a feedforward network with backpropagation, a method different from the Bayesian approach described in the case study of the book; this explains why we obtain different results.

Data

Our dataset consists of the following voltage measurements for 67 positions of the ball:
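The dataset can be pictured as a 2×67 matrix of voltages paired with 67 scalar positions. A minimal sketch of this layout, using randomly generated placeholder values (an assumption; the real values come from the solar-cell measurements):

```python
import numpy as np

# Placeholder data standing in for the real measurements (assumption:
# actual values come from the solar-cell experiment described above).
rng = np.random.default_rng(0)
V = rng.uniform(0.0, 1.0, (2, 67))   # row 0 = v1, row 1 = v2, one column per position
y = rng.uniform(-1.0, 1.0, (1, 67))  # ball position for each voltage pair

print(V.shape, y.shape)              # (2, 67) (1, 67)
```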

Architecture of the network & training method

To map the voltages to the position of the ball, we will train a feedforward network with 10 neurons in a single hidden layer, using the backpropagation algorithm. The architecture is as follows: a Tan-Sigmoid activation function in the hidden layer and a linear activation function in the output layer.
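The forward pass of this 2-10-1 architecture can be sketched as follows. The weights here are random placeholders (an assumption; the real values come from training):

```python
import numpy as np

# Hypothetical, untrained weights for a 2-10-1 feedforward network.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((10, 2))   # hidden-layer weights (10 neurons, 2 inputs)
b1 = rng.standard_normal((10, 1))   # hidden-layer biases
W2 = rng.standard_normal((1, 10))   # output-layer weights
b2 = rng.standard_normal((1, 1))    # output-layer bias

def forward(v):
    """Forward pass: Tan-Sigmoid hidden layer, linear output layer."""
    a1 = np.tanh(W1 @ v + b1)       # Tan-Sigmoid activation
    return W2 @ a1 + b2             # linear output: estimated position y

v = np.array([[0.5], [0.3]])        # one example voltage pair (v1, v2)
y_hat = forward(v)
print(y_hat.shape)                  # (1, 1): a single scalar position estimate
```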

We launch the following commands, after modifying the ann_FFBP_gd function to record the evolution of the mean squared error. We select a training set of 57 measurements (BP1 and BT1) and keep 10 measurements aside to assess the trained network:
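The training loop itself, gradient descent with backpropagation and an MSE history, can be sketched in NumPy as below. The data are synthetic placeholders and the hyperparameters (learning rate, epoch count) are assumptions, not the values used in the tutorial:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 67 (v1, v2) -> y measurements (assumption:
# the real data comes from the solar-cell experiment).
V = rng.uniform(0.0, 1.0, (2, 67))
y = np.sin(2 * V[0] - V[1])[None, :]          # arbitrary smooth target

# 57 training samples, 10 held out for assessment.
idx = rng.permutation(67)
tr, te = idx[:57], idx[57:]

# Small random initial weights for the 2-10-1 network.
W1 = rng.standard_normal((10, 2)) * 0.5
b1 = np.zeros((10, 1))
W2 = rng.standard_normal((1, 10)) * 0.5
b2 = np.zeros((1, 1))
lr = 0.05                                     # assumed learning rate
mse_history = []

for epoch in range(2000):
    # Forward pass on the training set.
    A1 = np.tanh(W1 @ V[:, tr] + b1)
    Y = W2 @ A1 + b2
    E = Y - y[:, tr]
    mse_history.append(np.mean(E ** 2))
    # Backpropagation of the MSE gradient.
    dY = 2 * E / tr.size
    dW2 = dY @ A1.T
    db2 = dY.sum(axis=1, keepdims=True)
    dA1 = W2.T @ dY
    dZ1 = dA1 * (1 - A1 ** 2)                 # derivative of tanh
    dW1 = dZ1 @ V[:, tr].T
    db1 = dZ1.sum(axis=1, keepdims=True)
    # Gradient-descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Assess on the 10 held-out measurements.
test_mse = np.mean((W2 @ np.tanh(W1 @ V[:, te] + b1) + b2 - y[:, te]) ** 2)
print(len(mse_history))
```

The recorded `mse_history` is what the modified `ann_FFBP_gd` would expose: plotting it shows how the training error evolves across epochs.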