I trained a TF/Keras model (a UNet architecture) on a Tesla K40. When I run inference on a Jetson AGX Xavier (Jetpack 4.4.1), I get very different results whenever the batch size is larger than 3. If not set explicitly, batch_size in model.predict() defaults to 32, and I only get correct results if I pass 3 inputs or fewer, or if I pass my entire input collection but specify batch_size=3 (or less) in model.predict().
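
The behaviour can be reproduced with something like the sketch below (the dummy shapes match my real inputs, which are built by the full script further down; the sample count of 32 is arbitrary):

import numpy as np
from tensorflow import keras as K

model = K.models.load_model('../models/EF_bce')

# Dummy two-branch input with the same shapes as my real data
x1 = np.random.rand(32, 128, 128, 13).astype('float32')
x2 = np.random.rand(32, 128, 128, 13).astype('float32')

preds_default = model.predict([x1, x2])              # implicit batch_size=32
preds_small = model.predict([x1, x2], batch_size=3)  # forced small batches

# On the Xavier this prints False (the two outputs disagree)
print(np.allclose(preds_default, preds_small))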

Here's the code:

import os
import numpy as np
from tensorflow import keras as K
import cdUtils
import cdModels
from libtiff import TIFF

img_size = 128
classes = 1
channels = 13
model_dir = '../models/'

model_name = 'EF_bce'

# Load the trained model
model = K.models.load_model(model_dir + model_name)

model.summary()

dataset_dir = '../imgs_pisa/'
img_pre = 'pisa_pre/'
img_post = 'pisa_post/'
cm_name = 'pisa-cm_' + model_name

res_dir = '../res_pisa/'
os.makedirs(res_dir, exist_ok=True)

# Build the stacked pre/post raster, pad it and tile it into img_size x img_size crops
raster_pre = cdUtils.build_raster(dataset_dir + img_pre)
raster_post = cdUtils.build_raster(dataset_dir + img_post)
raster = np.concatenate((raster_pre, raster_post), axis=2)
padded_raster = cdUtils.pad(raster, img_size)
test_image = cdUtils.crop(padded_raster, img_size, img_size)

# Create inputs for the Neural Network: split the stacked channels
# into the pre-event and post-event branches
inputs = np.asarray(test_image, dtype='float32')
inputs_1 = inputs[:, :, :, :channels]
inputs_2 = inputs[:, :, :, channels:]
inputs = [inputs_1, inputs_2]

# Perform inference (batch_size defaults to 32 when not specified)
results = model.predict(inputs)
print('Results: ', results)

print('Inference done!')
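
For reference, the only change that currently gives me correct results on the Xavier is forcing a small batch size in the call above:

# Same inference call, but with an explicit small batch size
results = model.predict(inputs, batch_size=3)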

I checked that the pre-processing functions (not included in this snippet) work properly, and the inputs are identical on both devices. Could this be a memory issue, even though I get no error at runtime?
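
In case it matters, this is how I would cap TensorFlow's GPU memory allocation before loading the model (a sketch assuming the TF 2.x build that ships with Jetpack 4.4.1; I have not confirmed it changes anything):

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all up front;
# must run before any GPU work, i.e. before load_model
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)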

Thanks.
