Questions tagged [torchscript]

Questions about TorchScript, a tool for serializing optimizable PyTorch models into a Python-independent format.

TorchScript is a way to create serializable and optimizable models from PyTorch code. Any code written in TorchScript can be saved from a Python process and loaded in a process with no Python dependency.

TorchScript provides tools to transition a model from pure Python to a TorchScript program that can be run in a standalone C++ program.

Read more in the TorchScript documentation: https://pytorch.org/docs/stable/jit.html
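A minimal end-to-end sketch of that workflow (the `MyModule` name and file path are illustrative):

```python
import torch

class MyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

# Compile to TorchScript, then serialize to a self-contained archive
scripted = torch.jit.script(MyModule())
scripted.save("my_module.pt")

# The archive can later be loaded without the Python source for MyModule
# (or, from C++, with torch::jit::load)
restored = torch.jit.load("my_module.pt")
```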

94 questions
7 votes · 1 answer

Torch JIT Trace = TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect

I am following this tutorial: https://huggingface.co/transformers/torchscript.html to create a trace of my custom BERT model, however when running the exact same dummy_input I receive an error: TracerWarning: Converting a tensor to a Python boolean…
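This warning typically comes from a data-dependent branch in the traced code: tracing evaluates the condition once and records only the branch taken for the example input. A minimal sketch (hypothetical `Gate` module, not the asker's BERT model):

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # Data-dependent branch: tracing converts the tensor condition
        # to a Python bool and records only the branch that was taken.
        if x.sum() > 0:
            return x * 2
        return x - 1

m = Gate()
# Emits TracerWarning: the positive branch is frozen into the trace
traced = torch.jit.trace(m, torch.ones(3))
# Even for negative input, the trace still computes x * 2
print(traced(-torch.ones(3)))
```

If the control flow must be preserved, `torch.jit.script` compiles the real `if`/`else` instead of freezing one path.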
6 votes · 0 answers

How to access module attributes such as the stride of convolutions from jit::script::module

I am currently writing a C++ program which needs to analyze the structure of a CNN model in TorchScript format. I am using the C++ torch library the way it is shown on torch.org, loading the model like so: #include…
Mercury
5 votes · 0 answers

"RuntimeError: PyTorch convert function for op 'pythonop' not implemented" AND "Python builtin is currently not supported in Torchscript"

Newbie question. I've been trying to convert this PyTorch model into a CoreML model. I've followed the guide here but couldn't make it work. I tried tracing and scripting but faced errors which hint that there might be an operation not supported in…
Artem Kirillov
5 votes · 1 answer

ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds

Trying to convert a question-generation T5 model to a TorchScript model; while doing that I run into this error: ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds. Here's the code that I ran on Colab. !pip install -U…
kiranr
5 votes · 2 answers

What are the differences between torch.jit.trace and torch.jit.script in torchscript?

TorchScript provides torch.jit.trace and torch.jit.script to convert PyTorch code from eager mode to script mode. From the documentation, I understand that torch.jit.trace cannot handle control flow and other data structures present in Python.…
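A minimal sketch of the difference, using a toy function with a data-dependent branch:

```python
import torch

def f(x):
    if x.sum() > 0:
        return x * 2
    return x - 1

# trace: runs f once on the example input and records only the ops
# executed, so the positive branch is frozen into the graph
traced = torch.jit.trace(f, torch.ones(3))

# script: compiles the source, keeping the if/else as real control flow
scripted = torch.jit.script(f)

neg = -torch.ones(3)
print(traced(neg))    # x * 2 branch, fixed at trace time
print(scripted(neg))  # x - 1 branch, chosen at run time
```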
4 votes · 1 answer

How to convert torchscript model in PyTorch to ordinary nn.Module?

I am loading the TorchScript model in the following way: model = torch.jit.load("model.pt").to(device). The children modules of this model are identified as RecursiveScriptModule. I would like to finetune the loaded weights, and in order to make it…
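There is no direct conversion back to an ordinary module, but if the original architecture is still available, one workaround is to copy the weights into a freshly constructed `nn.Module`. A sketch with a hypothetical `Net` standing in for the real model class:

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Stand-in for a module loaded via torch.jit.load("model.pt")
scripted = torch.jit.script(Net())

# Rebuild an ordinary module and copy the weights over;
# state_dict keys match because the architecture is the same
plain = Net()
plain.load_state_dict(scripted.state_dict())
plain.train()  # now finetunable like any nn.Module
```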
4 votes · 2 answers

TorchScript requires source access in order to carry out compilation for collections.deque

I'm trying to convert PyTorch FOMM model to TorchScript. As soon as I started to annotate some classes with @torch.jit.script I've got an error: OSError: Can't get source for . TorchScript requires source access in order…
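TorchScript needs the Python source of everything it compiles, which C-implemented types like `collections.deque` cannot provide. A common workaround is an annotated `typing.List` used as a bounded buffer; a sketch with a hypothetical `rolling_sum`:

```python
import torch
from typing import List

@torch.jit.script
def rolling_sum(xs: List[float], window: int) -> float:
    # A typed list stands in for collections.deque, whose source
    # TorchScript cannot access for compilation.
    buf: List[float] = []
    for x in xs:
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)  # drop the oldest element, deque-style
    total = 0.0
    for v in buf:
        total += v
    return total

print(rolling_sum([1.0, 2.0, 3.0, 4.0], 2))  # sum of the last 2 values
```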
serg_zhd
4 votes · 1 answer

Get value from c10::Dict in Pytorch C++

I'm using a TorchScript Model on Pytorch C++ Frontend. The model in Python returns an output dict as Dict[str, List[torch.Tensor]]. When I use it in C++, it returns a c10::Dict. What is the equivalent of this Python…
lamhoangtung
4 votes · 1 answer

How to use a custom Python object in TorchScript

I am trying to convert a PyTorch module to a ScriptModule and then load it in C++, but I am blocked by this error: This attribute exists on the Python module, but we failed to convert Python type: 'Vocab' to a TorchScript type. The Vocab is a Python…
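One approach is to turn the custom type into a TorchScript class by decorating it with `@torch.jit.script` and annotating its attributes and methods. A minimal sketch with a hypothetical two-method `Vocab` (not the asker's actual class):

```python
import torch
from typing import Dict

@torch.jit.script
class Vocab:
    # TorchScript can compile a plain Python class as long as every
    # attribute and method is type-annotated.
    def __init__(self, stoi: Dict[str, int]):
        self.stoi = stoi

    def lookup(self, token: str) -> int:
        if token in self.stoi:
            return self.stoi[token]
        return 0  # index reserved for unknown tokens

class Tagger(torch.nn.Module):
    def __init__(self, vocab: Vocab):
        super().__init__()
        self.vocab = vocab  # TorchScript classes are valid attributes

    def forward(self, token: str) -> int:
        return self.vocab.lookup(token)

m = torch.jit.script(Tagger(Vocab({"hello": 1})))
print(m("hello"))
```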
Chuanhua Yang
4 votes · 2 answers

Torchscripting a module with _ConvNd in forward

I am using PyTorch 1.4 and need to export a model with convolutions inside a loop in forward: class MyCell(torch.nn.Module): def __init__(self): super(MyCell, self).__init__() def forward(self, x): for i in range(5): …
Ziyuan
4 votes · 1 answer

How to enable Dict/OrderedDict/NamedTuple support in pytorch 1.1.0 JIT compiler?

From the release highlights of PyTorch 1.1.0, it appears that the latest JIT compiler now supports the Dict type. (Source: https://jaxenter.com/pytorch-1-1-158332.html) Dictionary and list support in TorchScript: Lists and dictionary types behave like…
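A minimal sketch of `Dict`/`List` annotations in a scripted function (the names are illustrative):

```python
import torch
from typing import Dict, List

@torch.jit.script
def count_lengths(batch: Dict[str, List[int]]) -> Dict[str, int]:
    # Dict and List behave like their Python counterparts inside
    # TorchScript, including iteration over .items()
    out: Dict[str, int] = {}
    for key, values in batch.items():
        out[key] = len(values)
    return out

print(count_lengths({"a": [1, 2, 3], "b": []}))
```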
tribbloid
3 votes · 1 answer

Torchscript call other function rather than forward

When I compile my torch model to TorchScript, I can use the forward function by just calling the TorchScript model object, model(). But when I want to use another function defined on the model, I can't call it. I try to do…
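Methods other than `forward` need the `@torch.jit.export` decorator to be compiled and stay callable on the scripted module; a minimal sketch:

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x):
        return x * 2

    @torch.jit.export
    def preprocess(self, x):
        # Without @torch.jit.export, only forward() (and whatever it
        # calls) is compiled; exported methods remain callable too.
        return x + 1

m = torch.jit.script(Model())
print(m.preprocess(torch.tensor([1.0])))
```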
eljiwo
3 votes · 2 answers

How to get values returned by a Tuple object in Mask R-CNN libtorch

I'm new to C++ and libtorch. I'm trying to load a model via TorchScript and run inference; the code looks like this: torch::jit::script::Module module; try { module =…
namngduc
3 votes · 2 answers

Torchscript vs TensorRT for real time inference

I have trained an object detection model to be used in production for real-time applications. I have the following two options. Can anyone suggest what is the best way to run inference on Jetson Xavier for best performance? Any other suggestions are…
Akshay Kumar
3 votes · 1 answer

If I Trace a PyTorch Network on Cuda, can I use it on CPU?

I traced my neural network using torch.jit.trace on a CUDA-compatible GPU server, and reloading that trace on the same server worked fine. But when I downloaded it onto my laptop (for quick testing) and try to load the…
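Passing `map_location` to `torch.jit.load` remaps serialized CUDA tensors onto the CPU, just as `torch.load` does for eager checkpoints. A CPU-only sketch (saving to an in-memory buffer instead of a GPU box):

```python
import io
import torch

# A tiny traced module stands in for a model traced on a GPU server
m = torch.jit.trace(torch.nn.Linear(2, 2), torch.randn(1, 2))
buf = io.BytesIO()
torch.jit.save(m, buf)
buf.seek(0)

# map_location moves any CUDA-resident parameters onto the CPU at load time
cpu_model = torch.jit.load(buf, map_location=torch.device("cpu"))
out = cpu_model(torch.randn(1, 2))
```

Note that if the traced graph itself hard-codes device moves (e.g. a `.cuda()` call captured during tracing), `map_location` alone will not fix those.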
JCunn