
I found an interesting model (a question generator), but I can't run it. I got this error:

Traceback (most recent call last):
  File "qg.py", line 5, in <module>
    model = AutoModelWithLMHead.from_pretrained("/home/user/ml-experiments/gamesgen/t5-base-finetuned-question-generation-ap/")
  File "/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_auto.py", line 806, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py", line 798, in from_pretrained
    import torch_xla.core.xla_model as xm
ModuleNotFoundError: No module named 'torch_xla'

I briefly googled and found that torch_xla is something used to train PyTorch models on TPUs. But I would like to run the model locally on a CPU (for inference, of course), and I get this error when PyTorch tries to load the TPU-bound tensors. How can I fix it?

This is the model I tried: https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap
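
For context, here is roughly what my qg.py looks like (a minimal sketch; only the from_pretrained line appears in the traceback, so the tokenizer and generation lines are assumed usage based on the model card):

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

# Load the fine-tuned T5 model and its tokenizer from a local checkout
path = "/home/user/ml-experiments/gamesgen/t5-base-finetuned-question-generation-ap/"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelWithLMHead.from_pretrained(path)  # fails here with ModuleNotFoundError: torch_xla

# Intended usage (prompt format taken from the model card): generate a
# question for a given answer/context pair, entirely on CPU
text = "answer: Manuel  context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
inputs = tokenizer([text], return_tensors="pt")
output = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```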

    This is a [bug](https://github.com/huggingface/transformers/pull/5636) which is already fixed, but not in a released version. Pull transformers from GitHub or apply the patch to your version. – cronoik Aug 15 '20 at 03:45

1 Answer


As @cronoik suggested, I installed the transformers library from GitHub. I cloned the latest version and ran python3 setup.py install in its directory. The bug is already fixed on master, but the fix has not yet been released on PyPI.
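
For anyone else hitting this, these are the exact steps (assuming git is available and you are inside the right virtualenv; the pip one-liner at the end is an equivalent alternative):

```bash
# Install transformers from the latest GitHub source, where the fix is merged
git clone https://github.com/huggingface/transformers.git
cd transformers
python3 setup.py install

# Alternatively, install straight from GitHub with pip:
# pip install git+https://github.com/huggingface/transformers.git
```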
