Describe the problem
I have successfully trained my model on a custom dataset with 4 classes (images of size 480x640), with an xception_65 encoder, using DeepLab. I am getting decent results on the validation set whenever I use the vis.py script: EvalImageA_ckpt, EvalImageB_ckpt. However, I am not getting the same results on the same images when I freeze the model.
I froze the model using export_model.py and it successfully produced a frozen_model.pb file. However, when I run inference with this .pb file on the exact same images linked above, the output is always 0 (i.e. everything is classified as "background"). Everything is black!
I believe this is an issue with how I am exporting or loading the model, and not necessarily with the model itself, because the results on these images differ between the vis.py script and my custom inference code. Perhaps I am not loading the graph or initializing the variables correctly, or perhaps I am not saving the weights correctly in the first place. Any help would be greatly appreciated!
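For reference, here is a minimal sanity check (just a sketch; the .pb path and image file name are placeholders) to separate a genuine all-background prediction from a visualization/colormap problem. It assumes the standard ImageTensor:0 / SemanticPredictions:0 tensor names produced by export_model.py, which are the same names my inference code below relies on.

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen graph (path is a placeholder).
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_4_11_19.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Any one of the evaluation images; file name is a placeholder.
    img = np.asarray(Image.open('eval_image.png').convert('RGB'))
    seg = sess.run('SemanticPredictions:0',
                   feed_dict={'ImageTensor:0': [img]})
    # If this prints only [0], the frozen graph itself predicts background
    # everywhere; otherwise the problem is in the colormap/visualization step.
    print('predicted labels:', np.unique(seg))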
Source code
Below I provide my code for inference:
from deeplab.utils import get_dataset_colormap
from PIL import Image
import tensorflow as tf
import time
import matplotlib.pyplot as plt
import numpy as np
import cv2
import os
import glob

# tensorflow arguments
flags = tf.app.flags  # flag object for setup
FLAGS = flags.FLAGS   # object to access initialized flags
flags.DEFINE_string('frozen', None,
                    'The path/to/frozen.pb file.')


def _load_graph(frozen):
    print('Loading model `deeplabv3_graph` into memory from', frozen)
    with tf.gfile.GFile(frozen, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def,
            input_map=None,
            return_elements=None,
            name="",
            op_dict=None,
            producer_op_list=None
        )
    return graph


def _run_inferences(sess, image, title):
    batch_seg_map = sess.run('SemanticPredictions:0',
                             feed_dict={'ImageTensor:0': [np.asarray(image)]})
    semantic_prediction = get_dataset_colormap.label_to_color_image(
        batch_seg_map[0],
        dataset=get_dataset_colormap.__PRDL3_V1).astype(np.uint8)
    plt.imshow(semantic_prediction)
    plt.axis('off')
    plt.title(title)
    plt.show()


def main(argv):
    # initialize model
    frozen = os.path.normpath(FLAGS.frozen)
    assert os.path.isfile(frozen)
    graph = _load_graph(frozen)

    # open graph resource and begin inference in-loop
    with tf.Session(graph=graph) as sess:
        for img_path in glob.glob('*.png'):
            img = Image.open(img_path).convert('RGB')
            _run_inferences(sess, img, img_path)


if __name__ == '__main__':
    flags.mark_flag_as_required('frozen')
    tf.app.run()  # call the main() function
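For reference, a quick follow-up check (sketch; the .pb path is a placeholder) that reuses the _load_graph helper above to confirm the two tensors the inference code relies on actually exist after tf.import_graph_def with name="":

graph = _load_graph('frozen_4_11_19.pb')  # placeholder path
for op in graph.get_operations():
    if op.name in ('ImageTensor', 'SemanticPredictions'):
        # Print the op name and the static shape of its output tensor(s).
        print(op.name, [out.get_shape() for out in op.outputs])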
And below is the command I use to export the model with the provided export_model.py script:
python export_model.py \
--logtostderr \
--atrous_rates=6 \
--atrous_rates=12 \
--atrous_rates=18 \
--output_stride=16 \
--checkpoint_path="/path/to/.../model.ckpt-32245" \
--export_path="/path/to/.../frozen_4_11_19.pb" \
--model_variant="xception_65" \
--num_classes=4 \
--crop_size=481 \
--crop_size=641 \
--inference_scales=1.0
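Since I suspect the weights might not be saved correctly in the first place, a direct way to check the checkpoint (sketch; the 'logits' substring filter is an assumption about DeepLab's variable naming, and the path is the same placeholder as above) is to list its variables and confirm the final classifier has 4 output channels:

import tensorflow as tf

ckpt = '/path/to/.../model.ckpt-32245'  # same checkpoint passed to export_model.py
for name, shape in tf.train.list_variables(ckpt):
    if 'logits' in name:
        # The last dimension of the classifier weights should equal num_classes (4).
        print(name, shape)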
System information
- What is the top-level directory of the model you are using: deeplab
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Enterprise
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12.0
- Bazel version (if compiling from source): N/A
- CUDA/cuDNN version: 9
- GPU model and memory: NVIDIA Quadro M4000, 8GB
- Exact command to reproduce: Does not apply