
I am using a Jetson AGX Xavier with JetPack 4.2.1.

I have not altered the TensorRT, UFF, or graphsurgeon versions; they are the stock ones.

I have retrained the SSD Inception v2 model on custom 600x600 images.

I took the pretrained model from here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

I changed the height and width to 600x600 in pipeline.config.

I am using the sampleUffSSD sample included in the TensorRT samples.

In config.py I replaced 300 with 600 in the input shape.
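For reference, this is a minimal sketch of that edit, assuming the stock sampleUffSSD config.py layout (node names may differ slightly in your copy):

```python
# Hedged sketch of the config.py edit, based on the stock sampleUffSSD
# config.py shipped with the TensorRT samples.
import graphsurgeon as gs
import tensorflow as tf

# The Placeholder that replaces the TF preprocessing subgraph:
# only the spatial dimensions change from 300 to 600.
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 600, 600])

# Note: the GridAnchor plugin node in the same file still carries
# featureMapShapes=[19, 10, 5, 3, 2, 1], which were computed for a
# 300x300 input.
```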

I generated frozen_graph.uff with the command: python3 convert_to_uff.py frozen_inference_graph.pb -O NMS -p config.py

In file BatchStreamPPM.h:

I changed

static constexpr int INPUT_H = 600; // replaced 300 by 600
static constexpr int INPUT_W = 600; // replaced 300 by 600
mDims = nvinfer1::DimsNCHW{batchSize, 3, 600, 600}; // replaced 300 by 600

In file sampleUffSSD.cpp

I changed

parser->registerInput("Input", DimsCHW(3, 600, 600), UffInputOrder::kNCHW); // replaced 300 by 600

cd sampleUffSSD

make clean ; make

When I ran sample_uff_ssd I got the error below:

&&&& RUNNING TensorRT.sample_uff_ssd # ./../../bin/sample_uff_ssd
[I] ../../data/ssd/sample_ssd_relu6.uff
[I] Begin parsing model...
[I] End parsing model...
[I] Begin building engine...
sample_uff_ssd: nmsPlugin.cpp:139: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * numLocClasses * 4 == inputDims[param.inputOrder[0]].d[0]' failed.
Aborted (core dumped)

I think the problem is related to the resolution change.

How can I optimise the model for a custom resolution?

It works fine with 300x300 resolution.
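For context on the failing assertion, it compares the number of anchor boxes the GridAnchor plugin generates against the size of the box-encoding tensor coming from the graph. A rough sketch of that arithmetic, assuming the stock GridAnchor settings (the 600x600 feature-map sizes below are my guess for SSD Inception v2, not values read from the frozen graph):

```python
# Hedged sketch: why `numPriors * numLocClasses * 4 == inputDims[...]`
# can fail when only the input resolution is changed.

def num_priors(feature_map_shapes, boxes_per_cell=(3, 6, 6, 6, 6, 6)):
    """Total anchor boxes produced across all SSD feature maps."""
    return sum(s * s * b for s, b in zip(feature_map_shapes, boxes_per_cell))

# Stock config.py: feature-map sizes for a 300x300 input.
priors_300 = num_priors([19, 10, 5, 3, 2, 1])

# For a 600x600 input the feature maps roughly double
# (hypothetical values -- verify against your frozen graph).
priors_600 = num_priors([38, 19, 10, 5, 3, 2])

# The retrained graph's box predictor now emits priors_600 * 4 values,
# but a GridAnchor still configured for 300x300 expects priors_300,
# so the DetectionOutput assertion fails.
print(priors_300, priors_600)  # -> 1917 7326
```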

Mitesh Patel