I'm using TensorFlow's `tf.image.non_max_suppression` in a Faster-RCNN network. The graph builds successfully and runs without errors, but the output consists of empty tensors: the dimension that depends on the input has size 0 (e.g., I get output tensors of shape (0, 4)). So I used the `tfdbg` tool to see what's going on, and found that the tensor value is `Uninitialized tensor` for the `tf.image.non_max_suppression` op. All subsequent tensors that depend on the output of the NMS op also show as `Uninitialized tensor`.
I had trained this Faster-RCNN network and saved the weights using `tf.train.Saver(..).save(..)`. During training, I was using a Python NumPy function to do NMS, plugged into the graph via the op wrapper `tf.py_func`. After the model was trained, I decided to change the Python function to the TensorFlow implementation.
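For context, the NumPy NMS I had plugged in with `tf.py_func` was a standard greedy NMS along these lines (this is a simplified sketch, not my exact training-time function; `nms_numpy` and its argument order are placeholder names):

```python
import numpy as np

def nms_numpy(boxes, scores, max_output, iou_threshold):
    """Greedy non-max suppression.

    boxes: (N, 4) array in (y1, x1, y2, x2) order.
    scores: (N,) array of confidences.
    Returns indices of the kept boxes, highest score first.
    """
    order = scores.argsort()[::-1]  # process boxes by descending score
    keep = []
    while order.size > 0 and len(keep) < max_output:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        # IoU of the top-scoring box against all remaining boxes
        yy1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        xx1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        yy2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        xx2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        # drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_threshold]
    return np.array(keep, dtype=np.int64)

# In the training graph this was wrapped roughly as:
#   keep = tf.py_func(nms_numpy,
#                     [proposals_yxyx, scores, max_n, thresh],
#                     tf.int64)
```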
`tf.image.non_max_suppression` has no learnable parameters, so I don't understand why the tensor is uninitialized when I load the saved model and run the graph. I also tried running `sess.run(tf.global_variables_initializer())` before loading the model with `tf.train.Saver(..).restore(..)`, but there's no change: I still see `Uninitialized tensor` in `tfdbg` for the NMS op and all subsequent ops.
Any ideas about this behaviour and how it can be solved?
This is how I use `tf.image.non_max_suppression`:

```python
keep = tf.image.non_max_suppression(
    proposals_yxyx,
    scores,
    tf.constant(cfg[cfg_key].RPN_POST_NMS_TOP_N),
    cfg[cfg_key].RPN_NMS_THRESH,
    name="Non-maximal-suppression")
```