
I am teaching myself TensorFlow and am currently experimenting with different models for image classification in the tensorflow/models/slim repo. Following the tutorial there, I have fine-tuned a pre-trained inception_resnet_v2 model and am trying to evaluate it. Is there a simple way to modify the eval_image_classifier.py script to print the labels of the images it is classifying? This would help in adapting the script for use with a test set.

osama

2 Answers


I know that this post is a little old, but I'm playing with TensorFlow at the moment, so maybe someone checking this post will find an answer here.

You can print within the evaluation loop via the eval_op, which can carry other data besides names_to_updates.values(). Originally it is written as:

eval_op = list(names_to_updates.values())

But you can change it to this:

eval_op = tf.Print(list(names_to_updates.values()), [predictions], message="predictions:", summarize=100)

An example output:

INFO:tensorflow:Evaluation [1/111]
I tensorflow/core/kernels/logging_ops.cc:79] predictions:[11 3 3 9]
INFO:tensorflow:Evaluation [2/111]
I tensorflow/core/kernels/logging_ops.cc:79] predictions:[8 10 3 7]

The numbers in the array after "predictions:" are the predicted label indices.

In the same way you can output, for example, the filenames of misclassified images, as described in How to get misclassified files in TF-Slim's eval_image_classifier.py?. And if you want human-readable names rather than bare indices, see the sketch below.
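A minimal sketch of mapping indices to names, assuming TF 1.x, the variable names used in eval_image_classifier.py (dataset, predictions, names_to_updates), and a labels.txt file in the dataset directory so that dataset.labels_to_names is populated:

if dataset.labels_to_names is not None:
    # Build a string tensor indexed by class id, then look up each prediction.
    label_names = tf.constant(
        [dataset.labels_to_names[i] for i in range(dataset.num_classes)])
    predicted_names = tf.gather(label_names, predictions)
    eval_op = tf.Print(list(names_to_updates.values()),
                       [predicted_names], message="predicted labels: ",
                       summarize=100)

Since tf.Print runs as a side effect of the eval_op, the names are emitted once per evaluated batch, just like the indices above.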

KaneFury

The evaluate function in slim is the one that actually calls session.run on the images, so that's the place you want to modify.
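For reference, the per-batch fetch inside slim's evaluation loop boils down to something like the following, so adding the predictions tensor to the fetch list there gives you the values directly, without tf.Print. This is a rough sketch, not a drop-in patch; sess, num_evals, eval_op, and predictions stand in for the corresponding objects in slim's evaluation code:

for _ in range(num_evals):
    # Fetch the metric updates and the raw predictions in the same run call.
    metric_values, predicted = sess.run([eval_op, predictions])
    print('predicted label indices:', predicted)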

Alexandre Passos