
I want to watch not only the continuous loss, which is optimized during training, but also other non-differentiable metrics (like top-1 or top-5 classification error) during training. Is that possible?

Example:

import tensorflow as tf
slim = tf.contrib.slim

outputs = MyModel(inputs)
continuous_loss = some_loss(outputs, labels)
# some_another_loss may return a tensor with a different shape than
# continuous_loss, which is a single scalar per batch
another_loss = some_another_loss(outputs, labels, ...)

optimizer = tf.train.RMSPropOptimizer(lr, momentum)
train_op = slim.learning.create_train_op(continuous_loss, optimizer, ...)
# this call blocks, so I can't run another op with session.run
slim.learning.train(train_op, logdir, ...)

What I need is simply to redefine train_step_fn and pass an array [train_op, another_loss] to slim.learning.train.
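
For what it's worth, slim.learning.train does accept a train_step_fn argument, so something close to this should work. A minimal sketch, assuming the train_op, another_loss and logdir names from the snippet above (the logging interval is arbitrary):

def train_step_fn(session, train_op, global_step, train_step_kwargs):
    # run one normal training step via slim's default implementation
    total_loss, should_stop = slim.learning.train_step(
        session, train_op, global_step, train_step_kwargs)
    # evaluate the non-differentiable metric (note: with a queue-based
    # input pipeline this extra run() consumes a fresh batch)
    metric_value, step = session.run([another_loss, global_step])
    if step % 100 == 0:  # illustrative logging interval
        print('step %d: another_loss = %s' % (step, metric_value))
    return total_loss, should_stop

slim.learning.train(train_op, logdir, train_step_fn=train_step_fn)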

1 Answer


There is no reason you cannot create your own hooks into your graph to pull an accuracy metric out. Even though you are using tf-slim, you can still use summaries to get the information you want. To log a top-1/top-5 error you would do a non-training run() call that fetches your error summary, then write the result to your summary writer.
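
A minimal sketch of that idea, assuming outputs are the logits from the question's snippet and labels holds integer class ids (the summary tags and writer are illustrative):

import tensorflow as tf

# in_top_k returns one boolean per example; the mean is top-k accuracy
top1 = tf.reduce_mean(tf.cast(tf.nn.in_top_k(outputs, labels, k=1), tf.float32))
top5 = tf.reduce_mean(tf.cast(tf.nn.in_top_k(outputs, labels, k=5), tf.float32))
error_summary = tf.summary.merge([
    tf.summary.scalar('top1_error', 1.0 - top1),
    tf.summary.scalar('top5_error', 1.0 - top5),
])

writer = tf.summary.FileWriter(logdir)
global_step = tf.train.get_or_create_global_step()

# non-training run(): no train_op in the fetches, so no weights are updated
summary_str, step = sess.run([error_summary, global_step])
writer.add_summary(summary_str, step)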

Alternatively, if you just want something you can print() in Python, you can fetch your error on a validation set:

feeds = {x_validation_data: x, y_validation_data: y}
fetches = [error, cross_entropy]
res = sess.run(fetches=fetches, feed_dict=feeds)
error_value = res[0]  # avoid rebinding the name of the error tensor itself

Without more info, that's all I can glean from your question. If you can calculate the error in TensorFlow by passing a validation set, then you can also fetch it out in your sess.run() call!

JCooke