C# Perceptron Tutorial

Intro

The perceptron is one of the simplest learning algorithms: it uses only a single neuron.
A usual representation of a perceptron (neuron) with 2 inputs looks like this:

Now for a better understanding:

Input 1 and Input 2 are the values we provide and Output is the result.

Weight 1 and Weight 2 start as random values; they are used to scale the input values so that the error becomes as small as possible. By adjusting them, the perceptron is able to learn.

The bias should be treated as another input value that always has the value 1 (bias = 1). It must have its own weight -> weight 3.
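As a quick sketch of the idea so far: the perceptron multiplies each input by its weight, adds the bias (an input fixed at 1) times weight 3, and passes the sum through an activation function. A step function is assumed here, and the input/weight values are made-up examples:

```csharp
using System;

class WeightedSumDemo
{
    // Step activation: 1 when the weighted sum is >= 0, otherwise 0.
    static int Activate(double sum) => sum >= 0 ? 1 : 0;

    static void Main()
    {
        double input1 = 1.0, input2 = 1.0;       // example inputs
        double weight1 = 0.5, weight2 = -0.25;   // example weights (normally random at first)
        double weight3 = 0.25;                   // the bias weight

        // The bias is treated as a third input that is always 1.
        double sum = input1 * weight1 + input2 * weight2 + 1 * weight3;

        Console.WriteLine(sum);            // 0.5
        Console.WriteLine(Activate(sum));  // 1
    }
}
```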

To learn, the perceptron uses supervised learning: that means we need to provide multiple inputs together with their correct outputs, so the weights can be adjusted correctly. Repeating this process steadily lowers the error until the generated output is almost equal to the desired output. Once the weights are adjusted, the perceptron will be able to 'guess' the output for new inputs.
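The training process described above can be sketched with the classic perceptron learning rule: for each training example, compute the output, compare it with the desired output, and nudge every weight by learning rate × error × its input. The AND gate data, the learning rate and the epoch count below are illustrative choices, not something fixed by the algorithm:

```csharp
using System;

class PerceptronTraining
{
    static int Activate(double sum) => sum >= 0 ? 1 : 0;

    static void Main()
    {
        // Training data for a logical AND gate (a linearly separable problem).
        double[][] inputs = { new double[]{0,0}, new double[]{0,1}, new double[]{1,0}, new double[]{1,1} };
        int[] desired = { 0, 0, 0, 1 };

        var rnd = new Random(0);
        double w1 = rnd.NextDouble(), w2 = rnd.NextDouble(), w3 = rnd.NextDouble();
        double learningRate = 0.1;

        // Repeat over the training set; every wrong guess nudges the weights.
        for (int epoch = 0; epoch < 100; epoch++)
        {
            for (int i = 0; i < inputs.Length; i++)
            {
                int output = Activate(inputs[i][0] * w1 + inputs[i][1] * w2 + 1 * w3);
                int error = desired[i] - output;          // -1, 0 or +1
                w1 += learningRate * error * inputs[i][0];
                w2 += learningRate * error * inputs[i][1];
                w3 += learningRate * error * 1;           // the bias input is always 1
            }
        }

        // After training, the perceptron reproduces the AND table.
        foreach (var row in inputs)
            Console.WriteLine(Activate(row[0] * w1 + row[1] * w2 + w3));
    }
}
```

Because AND is linearly separable, this loop is guaranteed to converge; 100 epochs is far more than it needs.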

How the perceptron works

One thing you must understand about the perceptron is that it can only handle linearly separable outputs.
Let’s take a look at the following image:

Each dot from the graphic above represents an output value:

red dots

shall return 0

green dots

shall return 1

As you can see, the outputs can be separated by a line, so the perceptron will know, using that line, whether it has to return 0 or 1.
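That separating line is exactly what the weights describe: the points where weight1·x + weight2·y + weight3 = 0 form the line, and the sign of the weighted sum tells us which side a point is on. The weights below are a made-up example, not the result of actual training:

```csharp
using System;

class DecisionLine
{
    static void Main()
    {
        // Hypothetical trained weights: they define the line x + y - 1.5 = 0.
        double w1 = 1.0, w2 = 1.0, w3 = -1.5;

        // A point with a positive weighted sum lies on the '1' side of the line;
        // a negative sum puts it on the '0' side.
        double[][] points = { new double[]{0,0}, new double[]{2,2} };
        foreach (var p in points)
        {
            double sum = p[0] * w1 + p[1] * w2 + w3;
            Console.WriteLine(sum >= 0 ? 1 : 0);  // (0,0) -> 0, (2,2) -> 1
        }
    }
}
```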

However, that line must be positioned correctly so that it separates the 2 kinds of outputs; this is where the weights and the bias come in: