
I'm running inference for semantic segmentation using DeepLab models pretrained on the Cityscapes dataset, based on the xception_65 and xception_71 architectures. I've observed that:

  1. xception_65 produces better segmentation masks than xception_71.
  2. xception_71 is significantly faster than xception_65.

As xception_71 has 71 layers, which is more than xception_65's 65 layers, shouldn't it have a higher inference time? Or am I wrong somewhere?

(Also, xception_65 has fewer blocks than xception_71.)

You can check the code to reproduce the result at colab:segmentation_deeplab.ipynb.

%%time

print('Model:', MODEL_NAME)
seg_map = MODEL.run(original_im)

xception_65

Model: xception65_cityscapes_trainfine
CPU times: user 1.08 s, sys: 815 ms, total: 1.89 s
Wall time: 1.71 s

xception_71

Model: xception71_cityscapes_trainfine
CPU times: user 146 ms, sys: 28 ms, total: 174 ms
Wall time: 649 ms
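
For reference, a single %%time cell measures only one run, so one-off setup cost can dominate; a rough sketch for averaging over several runs (assuming the MODEL and original_im objects from the notebook) would be:

import time

# MODEL and original_im are the objects used in the timing cells above.
_ = MODEL.run(original_im)  # warm-up run to exclude one-time setup cost

n_runs = 10
start = time.time()
for _ in range(n_runs):
    MODEL.run(original_im)
print('Average inference time: %.3f s' % ((time.time() - start) / n_runs))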

kHarshit

1 Answer


From your notebook:

_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
    'xception65_cityscapes_trainfine':
        'deeplabv3_cityscapes_train_2018_02_06.tar.gz',
    'xception71_cityscapes_trainfine':
        'deeplab_cityscapes_xception71_trainfine_2018_09_08.tar.gz',
}

Note how xception65 links to a deeplabv3 tar.gz, but xception71 links to a vanilla deeplab tar.gz.

DeepLab is a whole series of models. Your xception65 is a smaller backbone sitting under a newer, more powerful segmenter; that's why it performs better.
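
Just to see what files actually ship in each archive, a quick sketch (reusing _DOWNLOAD_URL_PREFIX and _MODEL_URLS from the snippet above) is to download both tarballs and list their members:

import tarfile
import urllib.request

# _DOWNLOAD_URL_PREFIX and _MODEL_URLS are the ones defined in the snippet above.
for name, fname in _MODEL_URLS.items():
    path, _ = urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + fname)
    with tarfile.open(path) as tar:
        print(name)
        for member in tar.getmembers():
            print('  ', member.name)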


To confirm the contents of the models, try this (from 1, 2):

%load_ext tensorboard

import tensorflow as tf  # TF1-style API: tf.Session / tf.summary.FileWriter

def graph_to_tensorboard(graph, out_dir):
  # Write the graph definition to an event file so TensorBoard's Graphs tab can render it.
  with tf.Session():
    train_writer = tf.summary.FileWriter(out_dir)
    train_writer.add_graph(graph)


graph_to_tensorboard(MODEL.graph, out_dir="logs")

%tensorboard --logdir logs
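
If you'd rather not launch TensorBoard, a rough proxy for model size is to count the operations in each loaded graph (a sketch, assuming the MODEL.graph and MODEL_NAME objects from the notebook):

# Rough size check: number of ops in the loaded inference graph.
print(MODEL_NAME, 'has', len(MODEL.graph.get_operations()), 'ops')
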
mdaoust