
This is similar to the question Can the trained model be deployed on another platform without any dependency on SageMaker or other AWS services?

I have trained a model on AWS SageMaker using the built-in algorithm Semantic Segmentation. The trained model, stored on S3 as model.tar.gz, is what I want to download and use for inference on my local PC, without using AWS SageMaker at all. Since the built-in Semantic Segmentation algorithm is built with the MXNet Gluon framework and the GluonCV toolkit, I tried to follow the MXNet and GluonCV documentation to run inference locally.

It is easy to download this file from S3; unzipping it gives three files:

  1. hyperparams.json: includes the parameters for network architecture, data inputs, and training. Refer to Semantic Segmentation Hyperparameters.
  2. model_algo-1
  3. model_best.params

Both model_algo-1 and model_best.params contain trained model parameters; I think they are the output of net.save_parameters (refer to Train the neural network). I can also load them with mxnet.ndarray.load.
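For reference, here is a rough sketch of the download-and-extract step (the bucket name and object key below are only placeholders for my own training output location):

import tarfile
import boto3

# download model.tar.gz from S3 (placeholder bucket and key)
s3 = boto3.client('s3')
s3.download_file('my-sagemaker-bucket', 'output/model.tar.gz', './model.tar.gz')

# extract hyperparams.json, model_algo-1 and model_best.params into ./model
with tarfile.open('./model.tar.gz', 'r:gz') as tar:
    tar.extractall('./model')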

Referring to Predict with a pre-trained model, I found that two things are necessary:

  1. Reconstruct the network for making inference.
  2. Load the trained parameters.

As for reconstructing the network for inference: since I used PSPNet for training, I can use the class gluoncv.model_zoo.PSPNet to rebuild it. I know how to run inference with AWS SageMaker services such as batch transform jobs, and I want to reproduce that on my local PC. However, if I reconstruct the network with gluoncv.model_zoo.PSPNet, I cannot be sure that its settings match those used by SageMaker at inference time, because I cannot inspect the image 501404015308.dkr.ecr.ap-northeast-1.amazonaws.com/semantic-segmentation:latest in detail.
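A rough sanity check I can think of (only a sketch; nclass=2 and the resnet50 backbone are my assumptions from the training job, not something confirmed by the SageMaker image) is to rebuild the network and compare its parameter names with the names stored in the downloaded file:

import mxnet as mx
import gluoncv

# rebuild PSPNet locally; nclass=2 and backbone='resnet50' are assumptions
net = gluoncv.model_zoo.PSPNet(nclass=2, backbone='resnet50', pretrained_base=False)

# parameter names saved by SageMaker vs. names in the rebuilt network
saved = mx.nd.load('./model/model_algo-1')
local = net.collect_params()
print(sorted(saved.keys())[:10])
print(sorted(local.keys())[:10])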

As for loading the trained parameters, I can use load_parameters. But between model_algo-1 and model_best.params, I don't know which one I should use.
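Since I am not sure which file is the right one, a crude sketch is simply to try load_parameters on both and see which one the rebuilt network accepts (again, nclass=2 is my assumption):

import mxnet as mx
import gluoncv

net = gluoncv.model_zoo.PSPNet(nclass=2, pretrained_base=False)

# try both candidate files and report which one loads cleanly
for candidate in ['./model/model_algo-1', './model/model_best.params']:
    try:
        net.load_parameters(candidate, ctx=mx.cpu())
        print(candidate, 'loaded successfully')
    except Exception as err:
        print(candidate, 'failed to load:', err)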


1 Answer


The following code works well for me.

import mxnet as mx
from mxnet import image
from gluoncv.data.transforms.presets.segmentation import test_transform
import gluoncv

# use cpu
ctx = mx.cpu(0)

# load test image
img = image.imread('./img/IMG_4015.jpg')
img = test_transform(img, ctx)
img = img.astype('float32')

# reconstruct the PSP network model (nclass=2, the number of classes used for training)
model = gluoncv.model_zoo.PSPNet(2)

# load the trained parameters from model_algo-1
model.load_parameters('./model/model_algo-1')

# make inference
output = model.predict(img)
predict = mx.nd.squeeze(mx.nd.argmax(output, 1)).asnumpy()
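
To look at the result, the predicted label map can be saved as an image; the snippet below is just one possible way, assuming the two-class case (labels 0 and 1) and continuing from the code above:

import numpy as np
from PIL import Image

# scale the 0/1 label map to 0/255 and save it as a grayscale mask
mask = Image.fromarray((predict * 255).astype(np.uint8))
mask.save('./img/IMG_4015_mask.png')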