
I am using MXNet to train a VQA model; the input is a (6244,) vector and the output is a single label.

During training, the loss never changes, but the accuracy oscillates within a small range. The first 5 epochs are:

Epoch 1. Loss: 2.7262569132562255, Train_acc 0.06867348986554285
Epoch 2. Loss: 2.7262569132562255, Train_acc 0.06955649207304837
Epoch 3. Loss: 2.7262569132562255, Train_acc 0.06853301224162152
Epoch 4. Loss: 2.7262569132562255, Train_acc 0.06799116997792494
Epoch 5. Loss: 2.7262569132562255, Train_acc 0.06887417218543046

This is a multi-class classification problem, with each answer label standing for a class, so I use softmax as the final layer and cross-entropy to evaluate the loss. The code for both is shown below.

So why does the loss never change? I just get it directly from cross_entropy.

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
loss = gluon.loss.SoftmaxCrossEntropyLoss()

epochs = 10
moving_loss = 0.
best_eva = 0
for e in range(epochs):
    for i, batch in enumerate(data_train):
        data1 = batch.data[0].as_in_context(ctx)
        data2 = batch.data[1].as_in_context(ctx)
        data = [data1, data2]
        label = batch.label[0].as_in_context(ctx)
        with autograd.record():
            output = net(data)
            cross_entropy = loss(output, label)
            cross_entropy.backward()
        trainer.step(data[0].shape[0])

        moving_loss = np.mean(cross_entropy.asnumpy()[0])

    train_accuracy = evaluate_accuracy(data_train, net)
    print("Epoch %s. Loss: %s, Train_acc %s" % (e, moving_loss, train_accuracy))

The eval function is as follows

def evaluate_accuracy(data_iterator, net, ctx=mx.cpu()):
    numerator = 0.
    denominator = 0.
    metric = mx.metric.Accuracy()
    data_iterator.reset()
    for i, batch in enumerate(data_iterator):
        with autograd.record():
            data1 = batch.data[0].as_in_context(ctx)
            data2 = batch.data[1].as_in_context(ctx)
            data = [data1, data2]
            label = batch.label[0].as_in_context(ctx)
            output = net(data)

        metric.update([label], [output])
    return metric.get()[1]

1 Answer


This question was asked and answered on the MXNet discussion forum here. There is no need to use the autograd.record scope to record the computational graph when computing the accuracy. Try instead:

def evaluate_accuracy(data_iterator, net, ctx=mx.cpu()):
    metric = mx.metric.Accuracy()
    data_iterator.reset()
    for i, batch in enumerate(data_iterator):
        data1 = batch.data[0].as_in_context(ctx)
        data2 = batch.data[1].as_in_context(ctx)
        data = [data1, data2]
        label = batch.label[0].as_in_context(ctx)
        # plain forward pass; no autograd.record scope is needed for evaluation
        output = net(data)
        metric.update([label], [output])
    return metric.get()[1]
  • The actual bug was that `data_train.reset()` was missing from the training loop at the start of each epoch. – sad- Sep 10 '18 at 19:01
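
Following up on that comment, here is a minimal sketch of the training loop with the iterator reset added at the top of each epoch. It assumes the same net, ctx, data_train, and evaluate_accuracy as in the question, and it also accumulates the loss over the whole epoch rather than keeping only the first element of the last batch:

from mxnet import gluon, autograd

# Assumes net, ctx, data_train and evaluate_accuracy are defined as in the question.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
loss = gluon.loss.SoftmaxCrossEntropyLoss()

epochs = 10
for e in range(epochs):
    # Rewind the iterator; without this, every epoch after the first
    # (and after evaluate_accuracy exhausts it) sees no batches at all.
    data_train.reset()
    cumulative_loss = 0.
    num_samples = 0
    for i, batch in enumerate(data_train):
        data1 = batch.data[0].as_in_context(ctx)
        data2 = batch.data[1].as_in_context(ctx)
        data = [data1, data2]
        label = batch.label[0].as_in_context(ctx)
        with autograd.record():
            output = net(data)
            cross_entropy = loss(output, label)
        cross_entropy.backward()
        trainer.step(data[0].shape[0])

        # Sum the per-sample losses over the epoch so the printed value
        # reflects all batches, not just the last one.
        cumulative_loss += cross_entropy.sum().asscalar()
        num_samples += data[0].shape[0]

    train_accuracy = evaluate_accuracy(data_train, net)
    print("Epoch %s. Loss: %s, Train_acc %s" % (e, cumulative_loss / num_samples, train_accuracy))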