
I am trying to load a local copy of the coref-spanbert model using `Predictor.from_path`, but it starts downloading the model again into `cache/huggingface`. Can anyone help me fix this?

>>> from allennlp.predictors import Predictor
>>> coref_model = Predictor.from_path('coref-spanbert-large-2021.03.10.tar.gz')
Downloading: 100%|██████████| 414/414 [00:00<00:00, 436kB/s]
Downloading: 100%|██████████| 213k/213k [00:00<00:00, 239kB/s]
Downloading:  34%|███▍      
Irshad Bhat
  • I believe it's downloading the pretrained BERT model weights. You can avoid that by setting the `load_weights` parameter of the `PretrainedTransformerEmbedder` to false: https://github.com/allenai/allennlp/blob/ebd6b5bae5693a7c858ccd4139e2a8b045475113/allennlp/modules/token_embedders/pretrained_transformer_embedder.py#L88. For example, `Predictor.from_path("...", overrides={"model.text_field_embedder.token_embedder.tokens.load_weights": False})` – petew Oct 22 '21 at 22:22
  • Hi @petew, I tried the above configuration; it still downloads the models. – user2478236 Oct 28 '21 at 08:23
  • Hmm could you post the logs? You might not have enough room here, but you could open a discussion on GitHub instead and tag me: https://github.com/allenai/allennlp/discussions – petew Oct 29 '21 at 17:09
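Based on petew's suggestion above, a minimal sketch of how the override could be passed to `Predictor.from_path`. The override key path is the one given in the comment and assumes the coref model's config nests the embedder under `model.text_field_embedder.token_embedder.tokens`; whether `overrides` may be passed as a dict or must be a JSON string can depend on the AllenNLP version, so a JSON string is built here to be safe.

```python
import json

# Override suggested in the comments: tell the PretrainedTransformerEmbedder
# not to (re)download the pretrained BERT weights, since the archive already
# contains the trained model weights.
# NOTE: the key path below is taken from petew's comment and may need to be
# adapted to the actual structure of the model's config.json.
overrides = {
    "model.text_field_embedder.token_embedder.tokens.load_weights": False
}
overrides_json = json.dumps(overrides)

# Not run here (requires allennlp and the local archive):
# from allennlp.predictors import Predictor
# predictor = Predictor.from_path(
#     "coref-spanbert-large-2021.03.10.tar.gz",
#     overrides=overrides_json,
# )
```

If the key path does not match the archive's config, inspecting the `config.json` inside the `.tar.gz` should reveal the correct nesting for the token embedder.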

0 Answers