
Below is the forward pass and a partly implemented backward pass (backpropagation) of a neural network:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X_train = np.asarray([[1,1], [0,0]]).T
Y_train = np.asarray([[1], [0]]).T

hidden_size = 2
output_size = 1
learning_rate = 0.1

# forward propagation

w1 = np.random.randn(hidden_size, 2) * 0.1
b1 = np.zeros((hidden_size, 1))
w2 = np.random.randn(output_size, hidden_size) * 0.1
b2 = np.zeros((output_size, 1))

Z1 = np.dot(w1, X_train) + b1
A1 = sigmoid(Z1)

Z2 = np.dot(w2, A1) + b2
A2 = sigmoid(Z2)

derivativeA2 = A2 * (1 - A2)
derivativeA1 = A1 * (1 - A1)

# first steps of back propagation

error = (A2 - Y_train)
dA2 = error / derivativeA2
dZ2 = np.multiply(dA2, derivativeA2)

What is the intuition behind:

error = (A2 - Y_train)
dA2 = error / derivativeA2
dZ2 = np.multiply(dA2, derivativeA2)

I understand that error is the difference between the current prediction A2 and the actual values Y_train.

But why divide this error by the derivative of A2, and then multiply the result of error / derivativeA2 by derivativeA2? What is the intuition behind this?


1 Answer


These expressions are indeed confusing:

derivativeA2 = A2 * (1 - A2)
error = (A2 - Y_train)
dA2 = error / derivativeA2

... because error doesn't have a meaning on its own. At this point, the goal is to compute the derivative of the cross-entropy loss with respect to A2, which has this formula:

dA2 = (A2 - Y_train) / (A2 * (1 - A2))

See these lecture notes (formula 6) for the derivation. It just happens that the layer before the loss is a sigmoid, and the sigmoid's derivative is A2 * (1 - A2). That's why the same expression appears again when computing dZ2 (formula 7): multiplying dA2 by derivativeA2 cancels the denominator, so dZ2 simplifies to A2 - Y_train.
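
Here is a minimal numerical sketch of that cancellation, using made-up values and the same variable names as the question's code:

import numpy as np

# Made-up example values just to illustrate the cancellation numerically
A2 = np.array([[0.7, 0.2]])        # sigmoid outputs, strictly between 0 and 1
Y_train = np.array([[1.0, 0.0]])   # targets

derivativeA2 = A2 * (1 - A2)                # sigmoid'(Z2) written in terms of A2
dA2 = (A2 - Y_train) / derivativeA2         # cross-entropy derivative w.r.t. A2 (formula 6)
dZ2 = np.multiply(dA2, derivativeA2)        # chain rule through the sigmoid (formula 7)

print(np.allclose(dZ2, A2 - Y_train))       # True: the division and multiplication cancel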

But if you had a different loss function (say, L2) or a different squashing activation instead of the sigmoid, then A2 * (1 - A2) wouldn't be reused in both places. The loss and the activation are different nodes in the computational graph.
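
For contrast, a sketch with an L2-style loss (same made-up values): the sigmoid derivative now appears only once, so nothing cancels:

import numpy as np

A2 = np.array([[0.7, 0.2]])
Y_train = np.array([[1.0, 0.0]])

derivativeA2 = A2 * (1 - A2)
dA2_l2 = A2 - Y_train                        # derivative of the L2 loss 0.5 * (A2 - Y_train)**2 w.r.t. A2
dZ2_l2 = np.multiply(dA2_l2, derivativeA2)   # chain rule: the A2 * (1 - A2) factor stays in dZ2

print(dZ2_l2)                                # no longer equal to A2 - Y_train, unlike the cross-entropy case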
