
I am trying to use the code below:

from transformers import AutoTokenizer, AutoModel
t = "ProsusAI/finbert"
tokenizer = AutoTokenizer.from_pretrained(t)
model = AutoModel.from_pretrained(t)

I think this error is due to my old version of transformers not supporting this pre-trained model; I checked, and that is confirmed. The error:

/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    380                 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
    381             )
--> 382             raise EnvironmentError(msg)
    383 
    384         except json.JSONDecodeError:

OSError: Can't load config for 'ProsusAI/finbert'. Make sure that:

- 'ProsusAI/finbert' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'ProsusAI/finbert' is the correct path to a directory containing a config.json file
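The error message reflects the two ways `from_pretrained` interprets its argument: a local directory (which must contain a `config.json`) or a hub identifier (which a very old release may fail to resolve even when the model exists on huggingface.co). A minimal sketch of that decision, written from the error message rather than the library's actual code:

```python
import os

def resolve_model_source(name_or_path):
    """Sketch (not transformers' real implementation) of how
    `from_pretrained` decides what it was given: a local directory is
    only usable if it contains config.json; anything else is treated
    as a model identifier to look up on the Hugging Face hub."""
    if os.path.isdir(name_or_path):
        if os.path.isfile(os.path.join(name_or_path, "config.json")):
            return "local directory"
        return "local directory missing config.json"
    return "hub identifier"
```

This suggests a workaround: clone the model repository locally (e.g. `git lfs install` followed by `git clone https://huggingface.co/ProsusAI/finbert`) and pass the local folder path to `from_pretrained`, bypassing the hub lookup entirely.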

My current versions:

  1. python 3.7
  2. transformers 3.4.0

I understand that my transformers version is old, but it is the only version compatible with Python 3.7. The reason I can't upgrade to Python 3.9 is that I am using multimodal-transformers (below), which only supports up to Python 3.7.

Reasons:

  1. https://multimodal-toolkit.readthedocs.io/en/latest/ <- this only supports up to Python 3.7
  2. Python 3.7 only supports transformers up to 3.4.0.
  3. I need to use multimodal-transformers because it makes text classification with tabular data easy. My dataset has text and category columns and I wish to use both, and this is the easiest approach I found. (If you have any suggestions, please do share them, thank you.)

My question is, is there a way to use the latest pre-trained models despite having the old transformers?

Learner91
  • There are several ways to do that, but at first, I would check if it is true that the multimodal-toolkit library only supports 3.7. Sometimes developers just put that constraint to say: "I tested and maintain the functionality with 3.7. I have not tested it with higher versions but maybe it works too". In case it doesn't work, I would probably extract the relevant code from transformers or the multimodal-toolkit library. – cronoik Sep 08 '22 at 09:10
  • Thank you for your response. Actually, I tried to install it with Python 3.8 and 3.9, and both failed to install multimodal. Since most Colab setups use 3.7 and the pip webpage also shows that it supports 3.7, I attempted 3.7 instead and it installed successfully. – Learner91 Sep 08 '22 at 13:40
  • The next step you can try is installing transformers 4.X. That should work; I have used it with Python 3.7. – cronoik Sep 08 '22 at 19:15
  • Yes, but it won't work with multimodal-transformers. I tried to import it after updating to 4.X but it failed. – Learner91 Sep 09 '22 at 03:17
  • You installed both in Colab with Python 3.7 and received an error message? Can you please add a code snippet that causes the error? I will have a look. – cronoik Sep 09 '22 at 07:21
  • NVM. There is a PR already that handles transformers > 4.16. Just install from this branch: https://github.com/georgian-io/Multimodal-Toolkit/pull/15 – cronoik Sep 09 '22 at 07:25
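For future readers: installing directly from that PR's branch can be done with pip using GitHub's read-only pull-request ref (a sketch; the exact ref syntax is pip/GitHub convention, not from the thread):

```shell
# Install multimodal-toolkit from the open PR (#15) that relaxes the
# transformers pin; refs/pull/<N>/head is GitHub's read-only ref for a
# pull request's branch.
pip install "git+https://github.com/georgian-io/Multimodal-Toolkit.git@refs/pull/15/head"

# A 4.x transformers release can then be installed alongside it.
pip install --upgrade transformers
```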

0 Answers