DEV Community

Discussion on: Explain neural networks Like I'm Five

PNS11 • Edited

They come in many varieties, but the most basic one is called a perceptron. It is based on an old idea about nerve cells and how they work: it accepts data in one end, does some computation with it, and delivers an interpretation out the other end. First you teach them, then you use them, sort of. The better the training, the better the result 'in production'.

To begin with it has some random ideas, usually expressed as numeric values. These are called weights: numbers that signify something. When it encounters data in the training phase, it multiplies (or otherwise combines) that data with its weights and then checks whether the result matches the answers in the training data set.

If it matches, it silently applauds itself and waits for more data. If it doesn't, it adjusts its weights in the appropriate direction and then waits for more data.
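That loop can be sketched in a few lines of Python (names like `predict` and `train_step`, and the learning rate, are illustrative, not from the comment above):

```python
def predict(weights, bias, inputs):
    # Weighted sum of inputs, then a hard threshold: 1 if positive, else 0.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train_step(weights, bias, inputs, truth, lr=0.1):
    # Compare the guess to the known answer and nudge the weights
    # in the direction that would have reduced the error.
    guess = predict(weights, bias, inputs)
    error = truth - guess          # 0 when correct: "silent applause"
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias

# Teach it a trivial rule: answer 1 whenever the first input is 1.
weights, bias = [0.0, 0.0], 0.0
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
for _ in range(10):
    for inputs, truth in data:
        weights, bias = train_step(weights, bias, inputs, truth)
```

After those passes over the data, the weights have settled and the perceptron answers the training examples correctly.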

When you have just one it isn't very exciting. But suppose you translate images into numbers representing the colour or brightness of each pixel, and then train a matrix of these perceptrons on such image translations, having already decided what those images are or should mean to the net. Then you can use it to detect those same things, and others that are similar, as long as you translate the new images into the data format your perceptron flock knows how to make guesses about.
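Here is a toy Python sketch of that translation step, with a "flock" of two perceptrons each guessing about the same image. The weights are made up for illustration; a real net would have learned them in training:

```python
# A 2x2 "image": each pixel's brightness as a number between 0 and 1.
image = [[0.9, 0.1],
         [0.8, 0.2]]
pixels = [p for row in image for p in row]   # flatten to [0.9, 0.1, 0.8, 0.2]

def perceptron(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Two perceptrons, each (hypothetically) trained to detect something
# different: a bright left column vs. a bright right column.
left_detector  = ([1.0, -1.0, 1.0, -1.0], -0.5)
right_detector = ([-1.0, 1.0, -1.0, 1.0], -0.5)

guesses = [perceptron(w, b, pixels) for w, b in (left_detector, right_detector)]
# The left column of the image is bright, so the first detector fires.
```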

This works because you control the thresholds for whether an answer from one of your perceptrons is OK given the data you supply. You could be somewhat lenient and see if that is enough for your use case; if it isn't, you retrain your net with a more demanding and exact threshold.
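The leniency idea can be shown by keeping the raw score instead of a hard yes/no, and accepting it only if it clears a threshold you choose (the numbers here are arbitrary examples):

```python
def score(weights, bias, inputs):
    # Raw weighted sum, before any yes/no decision.
    return sum(w * x for w, x in zip(weights, inputs)) + bias

weights, bias = [1.0, -1.0], 0.0
inputs = [0.6, 0.4]                 # raw score comes out around 0.2

lenient_threshold = 0.1
strict_threshold = 0.5

# The same answer passes the lenient bar but fails the strict one.
accept_lenient = score(weights, bias, inputs) > lenient_threshold
accept_strict = score(weights, bias, inputs) > strict_threshold
```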

The devil is in the details. How you design and train your neural networks depends on what you want to do. The idea, however, is to produce lots and lots of 'nerve cells' that are basically the same but encapsulate different data values, which are then used for interpretations, based on what you told them about their guesses in the training phase.

In PicoLisp, a perceptron object could look something like this:

(class +Perceptron +Entity)
(rel w1 (+Number))
(rel w2 (+Number))
(rel res (+Number))
(rel set (+Joint) percs (+PercParent))

(dm train> (Data Truth)
    (let D (dostuff Data (: w1) (: w2))  # compute a guess from the weights
        (if (= D Truth)
            (applaud This)  # correct: leave the weights alone
            (if (> D Truth)  # guessed too high: turn the weights down
                (and (=: w1 (adjustdown (: w1))) (=: w2 (adjustdown (: w2))) )
                (and (=: w1 (adjustup (: w1))) (=: w2 (adjustup (: w2))) ) ]

And then you'd probably write an 'interpret> method as well, which wouldn't adjust the weights; instead it would just leave its result in a field for later collection, or pass it on for further processing. How these details look depends heavily on the application, and I'm fairly certain the above won't work very well, besides being quite chatty pseudocode.
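For comparison, here is a rough Python counterpart to that object, with the placeholder helpers (dostuff, adjustup, adjustdown) replaced by concrete arithmetic; the step size and the thresholding are my assumptions, not part of the PicoLisp sketch:

```python
class Perceptron:
    def __init__(self, w1=0.0, w2=0.0, step=0.1):
        self.w1, self.w2, self.step = w1, w2, step
        self.res = None

    def interpret(self, x1, x2):
        # No weight adjustment here: just leave the result in a field.
        self.res = 1 if self.w1 * x1 + self.w2 * x2 > 0 else 0
        return self.res

    def train(self, x1, x2, truth):
        d = self.interpret(x1, x2)
        if d > truth:               # guessed too high: turn the weights down
            self.w1 -= self.step * x1
            self.w2 -= self.step * x2
        elif d < truth:             # guessed too low: turn them up
            self.w1 += self.step * x1
            self.w2 += self.step * x2
        # if d == truth: silent applause, leave the weights alone
```

Training it on a couple of examples, e.g. `p.train(1, 0, 1)` and `p.train(0, 1, 0)` repeated a few times, is enough for `interpret` to start giving the right answers on those inputs.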

Ikem Krueger

I have a hard time getting a grip on the Lisp syntax.

PNS11

It's actually not so much a syntax as a data-structure notation.

If you point out where it trips you up, I'll try to explain in better detail.