If you just want the code, you can get it here.
The perceptron, first published in 1957, is regarded as one of the earliest examples of a neural network and of machine learning. It was able to tell certain shapes apart, such as squares from circles.
Rosenblatt intended to build a copy of the human brain using simple hardware such as photocells and potentiometers. The photocells (the input neurons in this case) were connected to a single output neuron, which could display either true or false. Between the photocells and the output neuron, Rosenblatt placed potentiometers, which served as the weights.
To learn, the machine took in the picture and delivered its guess. If that guess was right, the potentiometers were left unchanged. But if it answered circle even though the answer was square, it would subtract a "learning step" value from the potentiometers whose photocells fired, and it would increment those potentiometer values if the mistake was the other way around.
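That update rule can be sketched in a few lines of JavaScript. This is not Rosenblatt's hardware, of course, just an illustration; the function name `updateWeights` and the argument names are mine:

```javascript
// Sketch of the perceptron update rule described above.
// `inputs` are the photocells (1 = fired, 0 = dark), `weights` the
// potentiometer values, `guessedCircle` / `isCircle` the guess and the
// true label, `learnStep` the learning step.
function updateWeights(weights, inputs, guessedCircle, isCircle, learnStep) {
  if (guessedCircle === isCircle) return weights; // correct guess: no change
  return weights.map((w, i) => {
    if (!inputs[i]) return w; // only photocells that fired are adjusted
    return guessedCircle && !isCircle
      ? w - learnStep  // said circle, was square: subtract
      : w + learnStep; // said square, was circle: add
  });
}
```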
Rosenblatt asserted that it could differentiate cats from dogs and men from women, and that it would soon even find military use. The New York Times even assumed that it was "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence". All of those statements were wrong, even though we have achieved most of the New York Times' predictions by now. Rosenblatt's machine was very limited because it had only a single layer of depth. It could differentiate shapes and certain letters, but that was it.
Let us first get the unimportant code out of the way. Lines 1-42 are simple setup code: listening for mouse-down events, drawing the input grid, defining the default values of the variables, and so on.
function weightsHandler() on lines 98-137 handles the visualisation. The function changes every neuron's colour depending on its weight; the colours show whether that neuron gravitates toward a circle or a square.
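The repository's actual colour scheme isn't reproduced here, but the idea could look something like the following sketch, where negative (square-leaning) weights shade toward red and positive (circle-leaning) weights toward blue. The function name `weightColour` and the colour choices are my assumptions:

```javascript
// Hypothetical colour mapping for a neuron's weight.
// `maxAbs` is the largest absolute weight, used to normalise intensity.
function weightColour(weight, maxAbs) {
  const intensity = (Math.min(Math.abs(weight) / maxAbs, 1) * 255) | 0;
  return weight >= 0
    ? `rgb(0, 0, ${intensity})`  // leans toward circle
    : `rgb(${intensity}, 0, 0)`; // leans toward square
}
```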
`neurons` stores boolean values indicating whether each input neuron is active or not.
`weights` stores, as decimal numbers, the weight of each corresponding neuron. A weight can be either positive or negative.
`leanSteps` is the amount of increment or decrement used when adjusting a weight.
`desiredOutput` is the desired output of the algorithm.
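For orientation, the state described above could be declared like this. The grid size of 10×10 is an assumption; the repository may use different dimensions and defaults:

```javascript
// Hypothetical initial state matching the variables described above.
const gridSize = 10; // assumed dimensions of the input grid
const neurons = new Array(gridSize * gridSize).fill(false); // active inputs
const weights = new Array(gridSize * gridSize).fill(0); // one weight per neuron
const leanSteps = 0.1; // increment/decrement applied per mistake
let desiredOutput = true; // true = circle, false = square
```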
function executeAlgorithm() on line 70 is where all the magic begins.
First it executes
function run(), which takes the active neurons, multiplies them by their weights, and returns true if the sum is greater than or equal to zero. On line 72 we check whether it returned true or false, true meaning a circle.
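In essence, run() computes a weighted sum over the active neurons. A minimal sketch of that step (parameter names assumed, since the repository's version reads from global state):

```javascript
// Sum the weights of all active neurons; a non-negative sum means "circle".
function run(neurons, weights) {
  let sum = 0;
  for (let i = 0; i < neurons.length; i++) {
    if (neurons[i]) sum += weights[i];
  }
  return sum >= 0;
}
```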
If the answer did not match our desired output, we call
function improve() (defined on line 44) to adjust the weights. This function runs through all our neurons and checks whether they are activated. If they are, we either add or subtract our learning step from each weight, depending on the desired output.
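A sketch of that adjustment, consistent with run() treating a non-negative sum as "circle" (again, the parameter names are assumptions; the repository's version likely works on shared state):

```javascript
// Nudge the weight of every active neuron toward the desired output
// by one learning step.
function improve(neurons, weights, desiredOutput, leanSteps) {
  for (let i = 0; i < neurons.length; i++) {
    if (!neurons[i]) continue;
    // desired circle (true): raise the sum; desired square (false): lower it
    weights[i] += desiredOutput ? leanSteps : -leanSteps;
  }
}
```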
After adjusting the weights, the algorithm has finished its work and waits for another input, this time being a bit better at guessing the shape.
If you run through this entire process often enough, the perceptron actually gets quite good at differentiating squares from circles.
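Put together, a repeated training loop over labelled examples might look like the sketch below. run() and improve() are re-declared here so the snippet runs on its own; train(), the examples format, and the epochs parameter are my additions, not part of the repository:

```javascript
// Weighted sum over active neurons; a non-negative sum means "circle".
function run(neurons, weights) {
  let sum = 0;
  neurons.forEach((on, i) => { if (on) sum += weights[i]; });
  return sum >= 0;
}

// Nudge the weights of active neurons toward the desired output.
function improve(neurons, weights, desiredOutput, leanSteps) {
  neurons.forEach((on, i) => {
    if (on) weights[i] += desiredOutput ? leanSteps : -leanSteps;
  });
}

// Repeatedly run the perceptron on labelled examples; improve on mistakes.
function train(examples, weights, leanSteps, epochs) {
  for (let e = 0; e < epochs; e++) {
    for (const { neurons, isCircle } of examples) {
      if (run(neurons, weights) !== isCircle) {
        improve(neurons, weights, isCircle, leanSteps);
      }
    }
  }
}
```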
You could fork my GitHub repository and make your own improvements to the code. One important extension would be a way to feed it large amounts of training data, for example from a .txt file, which would improve the perceptron considerably. If you think you have a nice extension, create a pull request and I may integrate your code into the repository.