
I am trying to perform a backward pass through my network, but I don't want to update the network's weights when I do the backward pass.

output = net:forward(input)
err = criterion:forward(output, label)
df_do = criterion:backward(output, label)
net:backward(input, df_do)

I'm assuming this can be done using either of these two methods:

accGradParameters(input, gradOutput, scale)
accUpdateGradParameters(input, gradOutput, learningRate)
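
For example, my understanding is that I could override these per module, so that backward still computes gradInput but never accumulates parameter gradients. The loop below is just a sketch of what I mean (nothing here is meant as the definitive way to do it):

-- Sketch: replace gradient-accumulation methods with no-ops on every module,
-- so net:backward() only propagates gradInput and leaves the weights alone.
for _, layer in ipairs(net:listModules()) do
   layer.accGradParameters = function() end
   layer.accUpdateGradParameters = function() end
end

local output    = net:forward(input)
local err       = criterion:forward(output, label)
local df_do     = criterion:backward(output, label)
local gradInput = net:backward(input, df_do)  -- weights remain unchanged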

Can I do this using the optim package in Torch?
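
My understanding of the usual optim training loop is that the weights only change when an optim routine such as optim.sgd applies the update, so calling the evaluation closure on its own should give me the backward pass without touching the parameters. A rough sketch of what I mean (net, criterion, input and label as above; sgdState is just a placeholder config):

local params, gradParams = net:getParameters()

-- Standard evaluation closure: forward + backward, returns loss and gradients.
local function feval(p)
   gradParams:zero()
   local output = net:forward(input)
   local err    = criterion:forward(output, label)
   local df_do  = criterion:backward(output, label)
   net:backward(input, df_do)
   return err, gradParams
end

local err = feval(params)              -- gradients computed, params untouched
-- optim.sgd(feval, params, sgdState)  -- only this call would update the weights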

Tom
  • Possible duplicate of [Finetune a Torch model](http://stackoverflow.com/questions/37459812/finetune-a-torch-model) – Manuel Lagunas Mar 28 '17 at 09:20
  • This is not a duplicate; I know about finetuning a network. However, I would like to perform backpropagation without updating the weights of the network – rahul_raghavan Mar 29 '17 at 16:56

0 Answers