I have a PyTorch instance segmentation model that predicts the masks of images one by one. Is there any way to parallelize this task so that it can be completed faster? I have tried using multiprocessing's pool.apply_async()
to call the method that performs prediction, passing the required arguments, but it throws a segmentation fault. Any help is much appreciated. Thanks.

planet_pluto
1 Answer
In general, you should be able to use torch.stack to stack multiple images together into a batch and then feed that to your model. I can't say for certain without seeing your model, though (i.e., if your model was built to explicitly handle one image at a time, this won't work).
import torch

model = ... # Load your model
input1 = torch.randn(3, 32, 32) # An RGB input image
input2 = torch.randn(3, 32, 32) # An RGB input image
model_input = torch.stack((input1, input2))
print(model_input.shape) # (2, 3, 32, 32)
result = model(model_input)
print(result.shape) # (2, 1, 32, 32): two output masks, one for each image
If you've trained the model yourself, this will look familiar, as it's essentially how we feed batches of images to the network during training.
You can stack more than two images together; the batch size is typically limited by the amount of GPU memory you have available.
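When you have many images and they don't all fit in one batch, a common pattern is to process them in fixed-size chunks inside torch.no_grad(). Here is a minimal sketch of that idea; the Conv2d stand-in, the image list, and the batch size are all illustrative placeholders for your own model and data:

```python
import torch
import torch.nn as nn

# Stand-in for your segmentation model: maps (N, 3, H, W) -> (N, 1, H, W)
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
model.eval()

images = [torch.randn(3, 32, 32) for _ in range(10)]  # your list of input images
batch_size = 4  # tune this to fit your GPU memory

masks = []
with torch.no_grad():  # disable autograd: faster, lower-memory inference
    for i in range(0, len(images), batch_size):
        batch = torch.stack(images[i:i + batch_size])  # (B, 3, 32, 32)
        masks.append(model(batch))

masks = torch.cat(masks)  # (10, 1, 32, 32), one mask per input image
print(masks.shape)
```

This keeps peak memory bounded by batch_size while still letting the GPU work on several images at once, which is usually far faster than predicting one image at a time.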

JoshVarty