Andrew Garcia

A reduced explanation on neural networks

In a linear fit, data is fitted to a linear equation, typically to the slope-intercept form:

y = w_1 x + w_0


A linear fit (red) of the data (in blue) with the above equation. This equation is better known as y = m x + b, but it has been defined as above for context.
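To make this concrete, here is a minimal sketch of such a fit in Python with NumPy; the data are made up for illustration, and np.polyfit simply returns the w_1 and w_0 that minimize the squared error:

import numpy as np

# illustrative (made-up) data: y roughly follows 2x + 1 plus some noise
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.normal(0, 1, x.size)

# fit a degree-1 polynomial; the coefficients returned are [w_1, w_0]
w1, w0 = np.polyfit(x, y, deg=1)
print(w1, w0)   # should land close to 2 and 1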

Likewise, for a multi-linear equation with a total of D independent variables, the expression is:

y = w_1 x_1 + w_2 x_2 + \cdots + w_D x_D + w_0

Either of the above equations can be fitted by changing the w_i weights in a way that minimizes the error between the data points and the theoretical points generated by the equation. This error metric is typically known as a loss function or cost function.
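As a rough sketch of what that error metric looks like in code, the following assumes some made-up data with D = 3 independent variables and evaluates the mean squared error, one common choice of loss function, for a random set of weights:

import numpy as np

# hypothetical data: N samples, D = 3 independent variables
N, D = 100, 3
X = np.random.random((N, D))
y_true = X @ np.array([1.0, -2.0, 3.0]) + 0.25   # assumed "true" relationship

# predictions from the multi-linear equation with randomly chosen weights
w = np.random.random(D)   # w_1 ... w_D
w0 = 0.5                  # intercept w_0
y_pred = X @ w + w0

# mean squared error: one common loss (cost) function
loss = np.mean((y_true - y_pred) ** 2)
print(loss)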

In the case of deep learning, the equation is more complex. The multilayer perceptron (MLP) model is used as an example here for simplicity, as it may be one of the simplest neural networks to explain. Its architecture is shown below:

A multilayer perceptron (MLP) neural network.

Here, each column is a layer of vertically stacked nodes. The first layer from the left has nodes containing the input x_i variables, which in this example carry the information of each pixel i. The layers residing between the first and last layers of the network are known as hidden layers, as the flow of operations in these sectors is typically not disclosed by the machine and is thus hidden from the user. The nodes in these hidden layers are also typically known as neurons, as they handle most of the w_i coefficient adjustments during the training phase. Finally, the last layer contains the output nodes, which give rise to the dependent variables of the model.
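For reference only, such an architecture could be declared with a library like Keras (which this post does not otherwise use); the layer sizes below follow the node counts used in the next section (4 inputs, 6 nodes per hidden layer), the single sigmoid output is an assumption, and the hidden layers are left without activation functions to mirror the purely linear description given here:

from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),                      # input layer: x_1 ... x_4
    keras.layers.Dense(6),                        # first hidden layer (6 neurons)
    keras.layers.Dense(6),                        # second hidden layer (6 neurons)
    keras.layers.Dense(1, activation="sigmoid"),  # output node
])
model.summary()

In practice, nonlinear activation functions are inserted between the dense layers, but they are outside the scope of this reduced explanation.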

It is helpful to get a notion of the connectivity of this topology to understand how the network operates. Here, the connections to 2 different nodes of the above network will be explored. First, let's look at the first node of the first hidden layer:

Connections of the four input variables to the first node of the first hidden layer.

The connections of the 4 input variables to the first node of the first hidden layer form a linear combination of those four variables, which can be expressed as:

n_1 l_1 = w_{1,n_1 l_1} x_1 + w_{2,n_1 l_1} x_2 + w_{3,n_1 l_1} x_3 + w_{4,n_1 l_1} x_4
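In NumPy this is just a dot product; the numbers below are made up purely to show the shape of the operation:

import numpy as np

# hypothetical inputs x_1 ... x_4 and the weights connecting them to n_1 l_1
x = np.array([0.2, 0.7, 0.1, 0.9])
w_n1l1 = np.array([0.5, -0.3, 0.8, 0.1])

n1_l1 = np.dot(w_n1l1, x)   # the linear combination written above
print(n1_l1)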

Now let's look at the connections for the first node in the second hidden layer:

Connections of the six nodes of the first hidden layer to the first node of the second hidden layer.

The connections of the 6 hidden nodes from the first hidden layer to the first node of the second hidden layer form a linear combination of those 6 hidden nodes. This new node can then be defined by the equation below. The node defined in the previous equation has been bolded for context.

n_1 l_2 = w_{n_1,n_1 l_2} \mathbf{n_1 l_1} + w_{n_2,n_1 l_2} n_2 l_1 + w_{n_3,n_1 l_2} n_3 l_1 + w_{n_4,n_1 l_2} n_4 l_1 + w_{n_5,n_1 l_2} n_5 l_1 + w_{n_6,n_1 l_2} n_6 l_1

“MLP neural networks are linear combinations of recursively-nested linear combinations.”

The number of nested operations depends on the depth [the number of layers of the network] and the number of variables and nodes in the system.
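A minimal NumPy sketch of that nesting, with layer sizes taken from the example above (4 inputs, two hidden layers of 6 nodes, one output) and random weights, looks like this; bias terms and activation functions are omitted to mirror the purely linear equations:

import numpy as np

x  = np.random.random(4)         # input layer: x_1 ... x_4
W1 = np.random.random((6, 4))    # weights into the first hidden layer
W2 = np.random.random((6, 6))    # weights into the second hidden layer
W3 = np.random.random((1, 6))    # weights into the output node

l1 = W1 @ x      # first hidden layer: linear combinations of the inputs
l2 = W2 @ l1     # second hidden layer: linear combinations of l1's nodes
y  = W3 @ l2     # output: a linear combination nested three levels deep
print(y)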

In a similar manner to the linear equation fit, deep learning can be seen as the process of minimizing the cost or loss function of a neural network given the inputs and the shape of the network.
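To tie this back to the simple linear fit, here is a minimal sketch of that minimization carried out by plain gradient descent on the slope-intercept model; the data, learning rate, and iteration count are made up for illustration:

import numpy as np

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.normal(0, 0.5, x.size)   # made-up data

w1, w0 = 0.0, 0.0    # start from arbitrary weights
lr = 0.01            # learning rate

for _ in range(2000):
    y_pred = w1 * x + w0
    error = y_pred - y
    # gradients of the mean-squared-error loss with respect to w_1 and w_0
    w1 -= lr * 2 * np.mean(error * x)
    w0 -= lr * 2 * np.mean(error)

print(w1, w0)   # should approach the underlying slope and intercept

Deep learning frameworks perform this same kind of update on every weight in the network, using backpropagation to obtain the gradients.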

Images

Image classification and image recognition problems are widely taught in deep learning. The popular image recognition problem of "cat and dog" identification from a library of these animals occurs through a process analogous to the following heuristic.

For this example, let's use the following figure of a puppy as a "shiba.png" file (right-click Save Image As...):


Left-click above image for a graphical representation of the "dog" array

The .png image may be imported to Python in the following way:

import numpy as np
import cv2 as vision       # import computer vision module (OpenCV)

# read the color image and declare it as an (L x W x 3 color channels) array
dog = vision.imread("shiba.png")

# we can also decrease the complexity of the image by looking only at the pixel
# intensities of the first channel (OpenCV orders channels as BGR, so this is
# the blue channel):
dog = dog[:, :, 0]          # now an L x W array

The resulting image is now a 180 x 180 pixel array. To correlate the image in a manner analogous to the linear regressions made with the slope-intercept equation, each of these pixels must be tied to an input of the neural network. This can be done by vectorizing the above image array with the flatten() operation:

X = dog.flatten()

The X variable above is thus a 1-D vector with a length of 32,400 pixels.

Click above image for a graphical representation of the vectorized "dog" array

After this transformation, each pixel can be more easily assigned to one of the nodes of the dense layer(s) of the neural network, each of which is fitted with a weight w_i. The vector W containing these weights must be of the same size as the number of pixels of this vectorized image. As an example, let's make these weights very small random numbers:

W = np.random.random(X.shape[0])/10000   # one small random weight per pixel

The value of a single y data point is then the dot product between this vector and the vectorized image:

y = np.sum(W*X)   # equivalent to the dot product np.dot(W, X)
print(y)

This value is analogous to calculating a single y value with the multi-linear equation shown at the beginning of this post using a random set of w_i weights. As this value is only a single data point, it is evident that more than a single image is needed to make a proper fit for a cat-dog identification model.
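As an illustration of what a batch of such data points could look like, the snippet below uses random arrays as stand-ins for additional flattened 180 x 180 images (only one real image is used in this post) and reuses the random weight vector W from above:

# hypothetical stand-ins for additional vectorized images and their labels
n_images = 10
X_all = np.random.random((n_images, 180 * 180))
labels = np.random.randint(0, 2, n_images)   # made-up labels: 1 = dog, 0 = cat

# one y value per image, all computed with the same random weights
y_all = X_all @ W
print(y_all.shape, labels.shape)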

Having the network properly identify this image as the drawing of a dog or a cat will require passing many images of dogs as well as cats through the neural network, allowing it to adjust the weights associated with each pixel in the images to generate an adequate y variable. In order to classify categorically rather than numerically, a logistic function (a "sigmoid") is typically used with the regression.
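A minimal sketch of that last step, applied to the single y value computed above (the 0.5 threshold is a common convention, assumed here rather than taken from any particular library):

def sigmoid(z):
    # logistic function: squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

probability = sigmoid(y)
print("dog" if probability > 0.5 else "cat", probability)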

This is a reduced explanation of a simple neural network model. For a deeper understanding of this concept, here are some good online references:

-Andrew
