AbHiNaV_PrAkAsH_AP

Understanding the Backpropagation Algorithm

You might be taking a course on deep learning. At the beginning it all seems very easy, and then you encounter “backpropagation” and start scratching your head because it is too “mathsy.”


Why do we need to understand the backpropagation algorithm at all?
The answer is very simple: we human beings are always curious to know how things happen in the real world.
For example, consider human breathing. From the outside it looks like a simple intake of air and release of CO2.

But human curiosity led us to discover that, in this short interval of 1–2 seconds,
the blood is first oxygenated, then transported through blood vessels to every single cell of our body,
and finally deoxygenated.

Humans are always curious about the mechanisms that govern the things of the real world around us.
And for any deep learning practitioner, “our world revolves around neural networks.”
So, to satisfy that curiosity and give you an idea of how neural networks are trained,
I’m presenting you with this article.

The contents of this article are:

  1. A brief introduction to neural networks.
  2. An attempt to explain backpropagation in a simpler way.
  3. Finally, working through the backpropagation algorithm with a concrete example.

Okay! So, first let us understand what a neural network is.

I don’t know whether you are already familiar with neural networks or not, so let us start with this concept.

[Image: a neural network]

A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
Neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria.

Or, in simpler words, we can describe a neural network as:

  • It’s just a computer program that learns and behaves in a remarkably similar way to the human brain.

  • Or, it’s just a way for computers to learn things, recognize patterns, and make decisions in a human-like way.

  • Or, neural networks enable computers to learn from a given set of data.

  • Or (in a more elaborate way), an ANN is a simulation of the network of neurons that makes up a human brain, so that the computer is able to learn things and make decisions in a human-like manner.
    . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . .

In simpler words, backpropagation is the central mechanism used to train a neural network: we calculate the error between the network’s output and our desired target value, and then we adjust the weights in a way that minimizes this error.
It’s similar to how we human beings learn from our mistakes.

Try to imagine this situation: you are trying to kick a football into the goal.
You randomly kick the football at some angle, then you measure the distance from the goalpost to the spot where the ball lands
on the first attempt.
Then you try to minimize this distance by changing the angle at which you kick the football.
And finally, you are able to put the football into the goal.

Now, compare this situation with a neural network:

  1. We randomly kicked the football.
    In a neural network, we initialize the weights randomly.

  2. We measured the distance between the goalpost and the spot where the ball landed.
    In a neural network, this is called measuring the total error.

  3. We changed the angle in such a manner as to minimize this distance.
    In a neural network, this is called updating the weights in order to get the desired target output.

The backpropagation algorithm is a very powerful way to train a neural network.
It is so powerful that it is used in everything from ZIP code recognition (a low-level example) and face recognition (a mid-level example) to sonar target recognition (a high-level example).

I hope you now have an intuitive picture of backpropagation.
So, you are ready to deal with the mathematical side of it.

. . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . .

Okay, so now let us jump into the backpropagation algorithm to understand it.

[Image: a simple neural network with an input layer, a hidden layer, and an output layer, each containing 2 neurons]

This is a figure of a simple neural network with an input layer, a hidden layer, and an output layer. Each layer has 2 neurons.

[Image: a single neuron — the weighted sum of its inputs plus a bias, passed through an activation function]

The function of a neuron is to sum up all of its inputs multiplied by their weights, add the bias,
and then pass this sum through an activation function to produce the output.
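
As a quick sketch in code (the function names here are mine, purely for illustration), a single neuron does something like this:

```python
import math

def sigmoid(z):
    """The activation function used throughout this article."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus the bias, passed through the activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(net)
```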

Let us understand Back Propagation with an example:

[Images: the example network with its inputs, weights, and biases labelled]

Here, H1 is a neuron, the sample inputs are x1 = 0.05 and x2 = 0.10, and the biases are b1 = 0.35 and b2 = 0.60.
The target values are T1 = 0.01 and T2 = 0.22.

Now we randomly initialize the weights,

[Image: the randomly initialized weights w1–w8]

Note: throughout this article we are using the SIGMOID function as the activation function.
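
For reference, the sigmoid function and its derivative (which we will need once we start backpropagating) are:

σ(x) = 1 / (1 + e^(−x))
σ′(x) = σ(x) · (1 − σ(x))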

[Image: the sigmoid activation function]

Let us calculate H1, H2 and output H1, output H2.

[Image: calculating H1, H2 and their outputs]

Similarly, we can calculate y1, y2, output y1 & output y2.

[Image: calculating y1, y2 and their outputs]
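
As a sketch in code, the whole forward pass for this network looks like the following. The inputs and biases are the ones given above; the weight values below are placeholders I’ve picked purely for illustration, since the actual initial weights appear only in the images — substitute the values from the figures.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Inputs and biases from the example above
x1, x2 = 0.05, 0.10
b1, b2 = 0.35, 0.60

# Placeholder weights, purely for illustration -- use the randomly
# initialized values shown in the figure
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input  -> hidden
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output

# Hidden layer (both hidden neurons are assumed to share bias b1)
net_h1 = w1 * x1 + w2 * x2 + b1
net_h2 = w3 * x1 + w4 * x2 + b1
out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)

# Output layer (both output neurons are assumed to share bias b2)
net_y1 = w5 * out_h1 + w6 * out_h2 + b2
net_y2 = w7 * out_h1 + w8 * out_h2 + b2
out_y1, out_y2 = sigmoid(net_y1), sigmoid(net_y2)

print(out_y1, out_y2)
```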

Calculating the total error

[Image: calculating the total error]
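
For reference, the total error is the sum of the squared errors at the two outputs (assuming the usual ½-squared-error loss of this kind of worked example; the ½ cancels neatly when we differentiate):

E_total = E_y1 + E_y2 = ½ · (T1 − out_y1)² + ½ · (T2 − out_y2)²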

Now we have to backpropagate in order to update the weights.
Consider w5, and the error gradient at w5:

[Image: the derivative of the total error with respect to w5]

Splitting it using the chain rule:

[Image: the chain-rule split of ∂E_total/∂w5]
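
Written out (assuming, as in the figure, that w5 connects the hidden neuron H1 to the output neuron y1, so w5 affects E_total only through out_y1), the split is:

∂E_total/∂w5 = (∂E_total/∂out_y1) · (∂out_y1/∂net_y1) · (∂net_y1/∂w5)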

Partially differentiating each term one by one.

[Images: evaluating each of the three partial derivatives]
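
With that squared-error loss and the sigmoid activation, the three factors work out to:

∂E_total/∂out_y1 = −(T1 − out_y1) = out_y1 − T1
∂out_y1/∂net_y1 = out_y1 · (1 − out_y1)
∂net_y1/∂w5 = out_h1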

The calculated error gradient at w5:

[Image: the numerical value of ∂E_total/∂w5]

Now updating w5

[Image: the update step for w5]
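
The update rule applied here is plain gradient descent, where η is the learning rate (a small constant chosen before training):

w5 (new) = w5 − η · ∂E_total/∂w5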

The new updated weight is w5 = 0.3595, and similarly w6 = 0.4086, w7 = 0.511 and w8 = 0.561.

Now, at the hidden layer, we update w1, w2, w3 and w4.
Consider w1, and the error gradient at w1:

[Image: the derivative of the total error with respect to w1]

But there is no w1 term present directly in the expression for Etotal.
So, in order to differentiate, we have to split the derivative several times.
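
Before going term by term, here is the full split we are aiming for, written out for reference (assuming, as in the figure, that w1 feeds the hidden neuron H1, whose output reaches E_total through both output neurons):

∂E_total/∂w1 = (∂E_y1/∂out_h1 + ∂E_y2/∂out_h1) · (∂out_h1/∂net_h1) · (∂net_h1/∂w1)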

Please pay attention and go through this slowly!

We can’t differentiate the encircled terms directly, so we have to split them further.

[Image: the split of ∂E_total/∂w1 — see the terms which are encircled]

Consider the term encircled in orange, and let us split it and apply the chain rule.

[Image: the chain-rule expansion of the orange-encircled term]

Now calculating:

[Images: evaluating each piece of the expansion]

We have now calculated the term encircled in orange.

[Image: the value of the orange-encircled term]

Consider the term which is encircled in blue.

[Image: evaluating the blue-encircled term]

Consider the term which is encircled in black.

[Image: evaluating the black-encircled term]
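
For reference, given the sigmoid activation and net_h1 = w1 · x1 + w2 · x2 + b1 (the wiring assumed above), these last two encircled terms evaluate to:

∂out_h1/∂net_h1 = out_h1 · (1 − out_h1)
∂net_h1/∂w1 = x1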

Now we’ve calculated all of the encircled terms.
Therefore, the error gradient at w1 is:

[Image: ∂E_total/∂w1, assembled from the pieces above]

Updating w1:

[Image: the update step for w1]

Now we have our updated weight w1, and similarly we can calculate w2, w3 and w4. What we’ve done so far is backpropagate and update w5, w6, w7 and w8 first, and then, with the help of these, backpropagate further and update the weights w1, w2, w3 and w4.

So, with these updated weights (w1, w2, w3 and w4), we have to calculate H1 and H2 again. After calculating H1 and H2, we can calculate output y1 and output y2. After that, we can calculate the total error as we did earlier, and again, with the help of this new total error, we backpropagate and update the weights.
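
Putting the whole loop together, here is a minimal sketch in code. The wiring (w1, w2 feeding H1; w3, w4 feeding H2; w5, w6 feeding y1; w7, w8 feeding y2), the starting weight values and the learning rate are assumptions made for illustration — plug in the actual values from the figures above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Values from the example above
x1, x2 = 0.05, 0.10
b1, b2 = 0.35, 0.60
T1, T2 = 0.01, 0.22

# Illustrative starting weights and learning rate (not the values from the figures)
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input  -> hidden
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output
eta = 0.5                                  # learning rate (assumed)

for step in range(10000):
    # Forward propagation
    out_h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
    out_h2 = sigmoid(w3 * x1 + w4 * x2 + b1)
    out_y1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
    out_y2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)

    # Total error (squared-error loss)
    E_total = 0.5 * (T1 - out_y1) ** 2 + 0.5 * (T2 - out_y2) ** 2

    # Backpropagation: dE/dnet at each output neuron
    d_y1 = (out_y1 - T1) * out_y1 * (1 - out_y1)
    d_y2 = (out_y2 - T2) * out_y2 * (1 - out_y2)
    # dE/dnet at each hidden neuron (computed with the old output-layer weights)
    d_h1 = (d_y1 * w5 + d_y2 * w7) * out_h1 * (1 - out_h1)
    d_h2 = (d_y1 * w6 + d_y2 * w8) * out_h2 * (1 - out_h2)

    # Weight updates: w <- w - eta * dE/dw
    w5, w6 = w5 - eta * d_y1 * out_h1, w6 - eta * d_y1 * out_h2
    w7, w8 = w7 - eta * d_y2 * out_h1, w8 - eta * d_y2 * out_h2
    w1, w2 = w1 - eta * d_h1 * x1, w2 - eta * d_h1 * x2
    w3, w4 = w3 - eta * d_h2 * x1, w4 - eta * d_h2 * x2

print("total error:", E_total, "predictions:", out_y1, out_y2)
```

Each pass through the loop is one round of forward propagation plus backpropagation; after enough iterations the total error shrinks and the predictions move toward the targets.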

[Image: backpropagating to update the weights]

And again, with the updated weights, we forward propagate.

[Image: forward propagating with the updated weights]

We have to iterate over and over again, alternating between forward propagation and backpropagation.

[Images: repeating forward propagation and backpropagation over several iterations]

We keep doing this until the total error (the cost function) is minimized, or in other words, until the values of our predicted outputs are close to the target values.

In one sentence, we can define backpropagation as a common method of training a neural network in which the initial system output is compared to the desired output, and the system is adjusted until the difference between the two is minimized.

We can now say that backpropagation is the central mechanism to train any neural network.

I hope you now understand what the backpropagation algorithm is and how it works.

Congratulations, you’ve just understood one of the toughest “mathsy” topics in machine learning.

Originally published here:

Please visit the Medium link above, and don’t forget to give it your 👏 and follow me on Medium!


Also, this is my very first blog on dev.to, so kindly follow me and hit the like button.
