
I have been trying to fix this problem for several days, with no luck. I have been implementing a simple neural net with a single hidden layer from scratch, just for my own understanding. I have successfully implemented it with sigmoid, tanh and ReLU activations for binary classification, and am now attempting to use softmax at the output for multi-class classification.

In every tutorial I have come across for a softmax implementation, including my lecturer's notes, the derivative of the softmax cross-entropy error with respect to the output layer's pre-activations is simplified down to just predictions - labels, which essentially amounts to subtracting 1 from the predicted probability at the position of the true label.
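(To be concrete about the simplification I mean, here is a small standalone check, separate from my network code, that compares that analytic gradient against a finite-difference estimate of the loss with respect to the logits. The values of z and t below are just made-up examples.)

import numpy as np

# Sanity check: d/dz of cross_entropy(softmax(z), t) should equal softmax(z) - t.
def softmax_1d(z):
    exps = np.exp(z - np.max(z))
    return exps / np.sum(exps)

def cross_entropy(z, t):
    return -np.sum(t * np.log(softmax_1d(z)))

z = np.array([0.5, -1.2, 2.0])        # example logits
t = np.array([0.0, 0.0, 1.0])         # example one-hot label

analytic = softmax_1d(z) - t          # the "predictions - labels" form

numerical = np.zeros_like(z)
eps = 1e-6
for i in range(len(z)):
    z_plus, z_minus = z.copy(), z.copy()
    z_plus[i] += eps
    z_minus[i] -= eps
    numerical[i] = (cross_entropy(z_plus, t) - cross_entropy(z_minus, t)) / (2 * eps)

print(analytic)      # analytic gradient
print(numerical)     # finite-difference estimate; agrees with the analytic one to ~1e-9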

However, I found that if this was used, the error of my network would continuously increase until it converged to always predicting one (random) class with 100% probability and the other with 0%. Interestingly, if I change this to labels - predictions, it works perfectly on my simple test of learning the binary XOR function below. Unfortunately, if I then attempt to apply the same network to a more complex problem (hand-written letters, 26 classes), it again very quickly converges to outputting one class with 100% probability, whether labels - predictions or predictions - labels is used.

I have no idea why this incorrect line of code works for the simple binary classification but not for the classification with many classes. I assume that I have something else backwards in my code, and that this incorrect change is essentially cancelling out that other error, but I cannot find where it might be.

import numpy as np


class MLP:

    def __init__(self, numInputs, numHidden, numOutputs):
        # MLP architecture sizes
        self.numInputs = numInputs
        self.numHidden = numHidden
        self.numOutputs = numOutputs

        # MLP weights
        self.IH_weights = np.random.rand(numInputs, numHidden)      # Input -> Hidden
        self.HO_weights = np.random.rand(numHidden, numOutputs)     # Hidden -> Output

        # Gradients corresponding to weight matrices computed during backprop
        self.IH_w_gradients = np.zeros_like(self.IH_weights)
        self.HO_w_gradients = np.zeros_like(self.HO_weights)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoidDerivative(self, x):
        return x * (1 - x)

    def softmax(self, x):
        # exps = np.exp(x)
        exps = np.exp(x - np.max(x))                            # Subtract max for numerical stability with large inputs
        return exps / np.sum(exps)

    def forward(self, input):
        self.I = np.array(input).reshape(1, self.numInputs)     # (numInputs, ) -> (1, numInputs)
        self.H = self.I.dot(self.IH_weights)
        self.H = self.sigmoid(self.H)
        self.O = self.H.dot(self.HO_weights)
        self.O = self.softmax(self.O)
        self.O += 1e-10                                         # Avoid log(0) in the loss
        return self.O

    def backwards(self, label):
        self.L = np.array(label).reshape(1, self.numOutputs)    # (numOutputs, ) -> (1, numOutputs)
        self.O_error = - np.sum([t * np.log(y) for y, t in zip(self.O, self.L)])
        # self.O_delta = self.O - self.L                        # CORRECT (not working)
        self.O_delta = self.L - self.O                          # INCORRECT (working)
        self.H_error = self.O_delta.dot(self.HO_weights.T)
        self.H_delta = self.H_error * self.sigmoidDerivative(self.H)
        self.IH_w_gradients += self.I.T.dot(self.H_delta)
        self.HO_w_gradients += self.H.T.dot(self.O_delta)
        return self.O_error

    def updateWeights(self, learningRate):
        self.IH_weights += learningRate * self.IH_w_gradients
        self.HO_weights += learningRate * self.HO_w_gradients

        self.IH_w_gradients = np.zeros_like(self.IH_weights)
        self.HO_w_gradients = np.zeros_like(self.HO_weights)


data = [
    [[0, 0], [1, 0]],
    [[0, 1], [0, 1]],
    [[1, 0], [0, 1]],
    [[1, 1], [1, 0]]
]

mlp = MLP(2, 5, 2)

numEpochs = 10000
learningRate = 0.1

for epoch in range(numEpochs):
    epochLosses, epochAccuracies = [], []
    for i in range(len(data)):
        prediction = mlp.forward(data[i][0])
        # print(prediction, "\n")
        label = data[i][1]
        loss = mlp.backwards(label)
        epochLosses.append(loss)
        epochAccuracies.append(np.argmax(prediction) == np.argmax(label))
    mlp.updateWeights(learningRate)
    if epoch % 1000 == 0 or epoch == numEpochs - 1:
        print("EPOCH:", epoch)
        print("LOSS: ", np.mean(epochLosses))
        print("ACC:  ", np.mean(epochAccuracies) * 100, "%\n")
  • Could your softmax function be `def softmax(x): return np.exp(x) / np.sum(np.exp(x), axis=0)`? – karthikbharadwaj Apr 26 '18 at 13:37
  • @karthikbharadwaj That does not work either - it results in a negative loss value. I think it should be `axis=-1`, which would only make a difference if my data was in higher dimensions anyway. – KOB Apr 26 '18 at 13:39
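
(Following up on the comment above, here is a small standalone illustration, with a made-up row vector x, of why the axis=0 version misbehaves for me: my outputs have shape (1, numOutputs), so summing over axis 0 just divides each element by itself, every "probability" becomes 1.0, and after the + 1e-10 offset the loss comes out as a tiny negative number.)

import numpy as np

x = np.array([[2.0, -1.0]])                          # example output row, shape (1, 2)
exps = np.exp(x)

print(exps / np.sum(exps, axis=0))                   # [[1. 1.]] -- each element divided by itself
print(exps / np.sum(exps, axis=-1, keepdims=True))   # [[0.953 0.047]] -- normalized across classes

O = exps / np.sum(exps, axis=0) + 1e-10              # what my forward pass would produce with axis=0
t = np.array([[1.0, 0.0]])                           # example one-hot label
print(-np.sum(t * np.log(O)))                        # ~ -1e-10: the small negative loss I observed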
