On a Jetson TX2 I am running:
- Linux4Tegra R32.2.1
- UFF Version 0.6.3
- TensorRT 5.1.6.1
- CUDA 10
- Python 3.6.8
I get this error message:
[TensorRT] ERROR: UffParser: Validator error: sequential/batch_normalization_1/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
From this code:
import uff
import tensorrt as trt

# args is parsed from the command line: output node names, input node name, and the path to the frozen .pb
output_nodes = [args.output_node_names]
input_node = args.input_node_name
frozen_graph_pb = args.frozen_graph_pb

uff_model = uff.from_tensorflow(frozen_graph_pb, output_nodes)  # successfully creates the UFF model

G_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(G_LOGGER)
builder.max_batch_size = 10
builder.max_workspace_size = 1 << 30
network = builder.create_network()
data_type = trt.DataType.FLOAT

parser = trt.UffParser()
input_verified = parser.register_input(input_node, (1, 234, 234, 3))  # returns True
output_verified = parser.register_output(output_nodes[0])             # returns True
buffer_verified = parser.parse_buffer(uff_model, network, data_type)  # returns False
The UFF model is created successfully.
The parser registers the input and output nodes without complaint.
Parsing the buffer fails with the error above.
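In case it helps with diagnosis, the op types can be listed straight from the frozen graph; below is a quick sketch (assuming the TF 1.x API that matches this UFF toolchain, and reusing frozen_graph_pb from above):

import tensorflow as tf

# Sketch: list the batch-norm op variants present in the frozen graph
graph_def = tf.GraphDef()
with open(frozen_graph_pb, "rb") as f:
    graph_def.ParseFromString(f.read())

print(sorted({node.op for node in graph_def.node if "BatchNorm" in node.op}))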
Does anyone know whether FusedBatchNormV3 is truly unsupported in TensorRT, and if so, is there an existing plugin that I can pull in using the graphsurgeon module?
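To clarify what I mean by using graphsurgeon: the rough, untested workaround I had in mind is to retype the FusedBatchNormV3 nodes to plain FusedBatchNorm before UFF conversion rather than writing a plugin. Whether that retyping is valid, and the handling of the "U" attribute, are guesses on my part:

import graphsurgeon as gs
import uff

# Untested sketch: load the frozen graph, retype every FusedBatchNormV3 node
# to the older FusedBatchNorm op (assuming the inference math is identical),
# then hand the modified GraphDef to the UFF converter.
dynamic_graph = gs.DynamicGraph(frozen_graph_pb)

for node in dynamic_graph.find_nodes_by_op("FusedBatchNormV3"):
    node.op = "FusedBatchNorm"
    if "U" in node.attr:  # guess: V3 adds a "U" dtype attribute that FusedBatchNorm lacks
        del node.attr["U"]

uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), output_nodes)

If there is instead an official plugin these nodes should be mapped to with gs.create_plugin_node, a pointer to it would be appreciated.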