
Consider a neural network with two hidden layers. In this case we have three matrices of weights. Let's say I'm starting the training: in the first round I set random values for all the weights of the three matrices. If this is correct, I have two questions:

1- Should I do the training from the input layer toward the output layer, or the other way around?

2- In the second round of training I have to apply gradient descent to the weights. Should I apply it to all the weights of all the matrices and only then calculate the error, or apply it weight by weight, checking whether the error has decreased before moving on to the next weight, and so on until the next training round?

1 Answer

You need to be familiar with forward propagation and backward propagation. In a neural network, you first initialize the weights randomly. Then you predict the output value (call it y_pred) from the training inputs (X_train). For each X_train sample you have y_train, the true output for that sample (the "ground truth"). You then compute a loss value according to a loss function; for simplicity, say loss = y_pred - y_train (this is not an actual loss function; real ones are a bit more complex). This is forward propagation in short.
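
For concreteness, here is a minimal sketch of one forward pass in NumPy, assuming two hidden layers with ReLU activations and a mean-squared-error loss; the layer sizes and the names (W1, W2, W3, X_train, y_train) are illustrative, not fixed by the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three weight matrices, initialized randomly (the first round of training).
W1 = rng.normal(scale=0.1, size=(4, 8))   # input (4 features) -> hidden layer 1
W2 = rng.normal(scale=0.1, size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(scale=0.1, size=(8, 1))   # hidden layer 2 -> output

X_train = rng.normal(size=(32, 4))        # toy training inputs
y_train = rng.normal(size=(32, 1))        # ground-truth outputs

# Forward propagation: left to right, from the input layer to the output layer.
h1 = np.maximum(0, X_train @ W1)          # first hidden layer (ReLU)
h2 = np.maximum(0, h1 @ W2)               # second hidden layer (ReLU)
y_pred = h2 @ W3                          # network prediction

loss = np.mean((y_pred - y_train) ** 2)   # mean squared error, a common loss
print(loss)
```

Note that the prediction flows from the input layer to the right (your question 1), while the weight updates in the next step flow in the opposite direction.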

Once you have the loss, you calculate how much you need to change the weights so that the network does better in the next iteration. For this we use the gradient descent algorithm: you compute new weights from the gradient of the loss value you got. This is backward propagation in short.
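
As a sketch of one such gradient-descent step for the same toy network (names and sizes are again illustrative): the gradients with respect to all three weight matrices are computed in a single backward sweep, from the output layer back toward the input, and all matrices are updated together before the loss is evaluated again, which addresses your question 2, since in standard gradient descent the weights are updated simultaneously, not one by one.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 8))
W3 = rng.normal(scale=0.1, size=(8, 1))
X = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))
lr = 0.01                                 # learning rate
n = X.shape[0]

# Forward pass, keeping the intermediate activations for the backward pass.
h1 = np.maximum(0, X @ W1)
h2 = np.maximum(0, h1 @ W2)
y_pred = h2 @ W3

# Backward pass: chain rule applied from the output back toward the input.
d_y = 2 * (y_pred - y) / n                # gradient of the MSE loss w.r.t. y_pred
dW3 = h2.T @ d_y                          # gradient for the last weight matrix
d_h2 = (d_y @ W3.T) * (h2 > 0)            # back through the second ReLU layer
dW2 = h1.T @ d_h2
d_h1 = (d_h2 @ W2.T) * (h1 > 0)           # back through the first ReLU layer
dW1 = X.T @ d_h1

# Update all three matrices in the same step, then recompute the loss on the
# next forward pass -- the weights are not updated one by one.
W1 -= lr * dW1
W2 -= lr * dW2
W3 -= lr * dW3
```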

You repeat these steps multiple times, and the weights will improve from random values to trained weights.
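
Putting the two sketches above together, a toy training loop might look like this; the printed loss should shrink as the weights move from random to trained values (the learning rate, sizes, and synthetic target are all illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 8))
W3 = rng.normal(scale=0.5, size=(8, 1))
X = rng.normal(size=(64, 4))
y = X @ rng.normal(size=(4, 1))           # a learnable synthetic target
lr, n = 0.05, X.shape[0]

for step in range(1001):
    # Forward propagation.
    h1 = np.maximum(0, X @ W1)
    h2 = np.maximum(0, h1 @ W2)
    y_pred = h2 @ W3
    loss = np.mean((y_pred - y) ** 2)
    # Backward propagation and a simultaneous update of all three matrices.
    d_y = 2 * (y_pred - y) / n
    dW3 = h2.T @ d_y
    d_h2 = (d_y @ W3.T) * (h2 > 0)
    dW2 = h1.T @ d_h2
    d_h1 = (d_h2 @ W2.T) * (h1 > 0)
    dW1 = X.T @ d_h1
    W1 -= lr * dW1
    W2 -= lr * dW2
    W3 -= lr * dW3
    if step % 200 == 0:
        print(step, loss)                 # the loss should decrease over time
```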

Gayal Kuruppu
  • So in the second round, I have to calculate the gradient descent of all the weights, starting from the left matrix of weights to the right, and then calculate all the (w*x + b) for each neuron again and again? – Roger_88 Jun 27 '20 at 01:00