I am trying to perform a backward pass through my network, and I don't want the network's weights to be updated when I do the backward pass. Here is the relevant code:
output = net:forward(input)               -- forward pass
err = criterion:forward(output, label)    -- compute the loss
df_do = criterion:backward(output, label) -- gradient of the loss w.r.t. the output
net:backward(input, df_do)                -- backpropagate through the network
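As far as I understand, net:backward both computes gradInput and accumulates the parameter gradients via accGradParameters; the weights themselves only change when something like updateParameters or an optimizer step is called. One idea I had (a sketch, not tested) is to call updateGradInput directly, so that no parameter gradients are accumulated at all:

output = net:forward(input)
err = criterion:forward(output, label)
df_do = criterion:backward(output, label)
-- updateGradInput computes only the gradient w.r.t. the input,
-- skipping accGradParameters, so the parameter gradients stay untouched
gradInput = net:updateGradInput(input, df_do)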
I'm assuming this can be done by overriding either of these two methods (see the sketch below):
accGradParameters(input, gradOutput, scale)
accUpdateGradParameters(input, gradOutput, learningRate)
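For example, replacing accGradParameters with a no-op in every module should mean that net:backward still computes gradients w.r.t. the input but never accumulates parameter gradients. A rough sketch of what I mean (listModules is the standard nn.Module method; I also override accUpdateGradParameters to be safe):

-- replace the gradient-accumulation methods with no-ops in every module,
-- so net:backward() still returns gradInput but accumulates nothing
for _, m in ipairs(net:listModules()) do
   m.accGradParameters = function() end
   m.accUpdateGradParameters = function() end
end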
Can I do this using the optim package in Torch?
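For context, my optim training step currently looks roughly like this (the variable names are from my own code), so I'm effectively asking whether the weights are guaranteed to stay fixed as long as I never call optim.sgd:

require 'optim'

local params, gradParams = net:getParameters()

local function feval(x)
   gradParams:zero()
   local output = net:forward(input)
   local err = criterion:forward(output, label)
   local df_do = criterion:backward(output, label)
   net:backward(input, df_do)  -- accumulates into gradParams, but does not touch params
   return err, gradParams
end

-- the weights only change inside this call; skipping it should leave them fixed
optim.sgd(feval, params, {learningRate = 0.01})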