I have installed Triton Inference Server with Docker:
docker run --gpus=1 --rm -p8000:8000 -p8001:8001 -p8002:8002 \
    -v /mnt/data/nabil/triton_server/models:/models \
    nvcr.io/nvidia/tritonserver:22.08-py3 \
    tritonserver --model-repository=/models
I have also created the TorchScript model from my PyTorch model using:
from model_ecapatdnn import ECAPAModel
import soundfile as sf
import torch
model_1 = ECAPAModel.ECAPAModel(lr = 0.001, lr_decay = 0.97, C = 1024, n_class = 18505, m = 0.2, s = 30, test_step = 3, gpu = -1)
model_1.load_parameters("/ecapatdnn/model.pt")
model = model_1.speaker_encoder
# Switch the model to eval mode
model.eval()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 48000)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
# Save the TorchScript model
traced_script_module.save("traced_ecapatdnn_bangasianeng.pt")
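Before worrying about the config, I wanted to convince myself that a traced module still accepts a batch size different from the example input's, since Triton will batch requests. This toy module is hypothetical, just standing in for the speaker encoder:

```python
import torch

class Toy(torch.nn.Module):
    # Hypothetical stand-in for the ECAPA speaker encoder:
    # takes (B, 48000) audio, returns a (B, 1) "embedding"
    def forward(self, x):
        return x.mean(dim=1, keepdim=True)

m = Toy().eval()
# Trace with batch size 1, exactly as done for the real model
traced = torch.jit.trace(m, torch.rand(1, 48000))
traced.save("toy.pt")

# Reload and call with batch size 2: the traced ops generalize over B
loaded = torch.jit.load("toy.pt")
out = loaded(torch.rand(2, 48000))
print(tuple(out.shape))
```

So the batch dimension does seem to stay dynamic after tracing, which is why I assume I can leave it out of the dims in the config.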
As you can see, my model takes a tensor of shape (B x N), where B is the batch size and N is the number of audio samples (48000 in the example input above).
How do I write the config.pbtxt for this model?
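From the model-configuration docs my understanding is that it should look roughly like the sketch below, with the batch dimension left out of dims because max_batch_size handles it. The model name, the INPUT__0/OUTPUT__0 naming (which I believe the PyTorch backend expects), and the -1 output dim are my guesses; I don't know the embedding size, hence the wildcard:

```
name: "ecapatdnn"
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 48000 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ -1 ]
  }
]
```

Is this the right shape for the config, and do I need to set dims differently given that my model's input is (B x N)?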