
I created a GoogLeNet model via NVIDIA DIGITS with two classes (called positive and negative).

If I classify an image with DIGITS, it shows me a nice result like positive: 85.56% and negative: 14.44%.

If I pass the same image through that model using pycaffe's `classify.py`, I get a result like `array([[ 0.38978559, -0.06033826]], dtype=float32)`.

So how do I read/interpret this result? How do I calculate the confidence levels (not sure if that's the right term) that DIGITS shows from the raw values returned by `classify.py`?
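For context, a `"Softmax"` layer maps raw scores to probabilities in `[0, 1]` that sum to one, and DIGITS displays such probabilities as percentages. Here is a minimal NumPy sketch of that mapping, treating the two values above as hypothetical raw scores (note the result does not match the DIGITS percentages, which is part of the puzzle):

```python
import numpy as np

# The raw values returned by classify.py, treated as hypothetical raw scores
scores = np.array([0.38978559, -0.06033826], dtype=np.float32)

# Softmax: exponentiate and normalize so outputs lie in [0, 1] and sum to 1
exp_scores = np.exp(scores - scores.max())  # subtract max for numerical stability
probs = exp_scores / exp_scores.sum()

print(probs)  # -> roughly [0.61, 0.39], i.e. 61% / 39%, not 85.56% / 14.44%
```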

  • It seems like the `deploy.prototxt` you feed to `classify.py` is missing the last `"Softmax"` layer. – Shai May 23 '16 at 06:48
  • The last layer entry is `layer { name: "prob" type: "Softmax" bottom: "loss3/classifier" top: "prob" }` – pogopaule May 23 '16 at 14:17
  • That is a bit odd, as a `"Softmax"` layer should output values in the range `[0..1]` that sum to one... Can you look at the log and see which output layer you are actually getting? – Shai May 23 '16 at 14:21
  • Here is the [log output](https://gist.github.com/pogopaule/5a74d504c5d98b39c107b184f85808b3#file-classify-py-output) and the full [deploy.prototxt](https://gist.github.com/pogopaule/5a74d504c5d98b39c107b184f85808b3#file-deploy-prototxt) – pogopaule May 23 '16 at 16:29

1 Answer


This issue led me to the solution.

As the log shows, the network produces three outputs, but `Classifier#classify` only returns the first one. So, e.g., by changing `predictions = out[self.outputs[0]]` to `predictions = out[self.outputs[2]]`, I get the desired values (the third output is the `"prob"` blob). A name-based alternative is sketched below.
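For illustration, here is a minimal pycaffe sketch that avoids the fragile index by reading the `"prob"` blob by name instead, assuming the `deploy.prototxt` from the question (whose final `"Softmax"` layer's top is `"prob"`). File names are placeholders:

```python
import caffe

# Placeholder paths for the model exported from DIGITS
net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)

# Standard pycaffe preprocessing: HxWxC RGB floats in [0,1]
# -> CxHxW BGR in [0,255] (mean subtraction omitted for brevity,
# though DIGITS-trained models typically need it)
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_channel_swap('data', (2, 1, 0))
transformer.set_raw_scale('data', 255)

image = caffe.io.load_image('image.jpg')  # placeholder image path
net.blobs['data'].data[...] = transformer.preprocess('data', image)

out = net.forward()
probs = out['prob']   # the Softmax output blob named "prob" in deploy.prototxt
print(probs)          # e.g. [[0.8556  0.1444]] -> positive: 85.56%, negative: 14.44%
```

Selecting the blob by name rather than by position in `self.outputs` keeps the code working even if the network exposes its auxiliary classifier outputs in a different order.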
