I have tried this but it's not working for me. I am using this Git repo. I am building a desktop app and don't want users to have to download the model; I want to ship the models with the build. I know the transformers library looks for models in ~/.cache/torch/transformers and downloads them if they aren't there, and I also know you can pass a cache_dir parameter to from_pretrained.
This is what I am trying:
import os
from transformers import AutoConfig

# Point the transformers cache at a folder inside the project
cache = os.path.join(os.path.abspath(os.getcwd()), 'Transformation/Annotators/New Sentiment Analysis/transformers')
os.environ['TRANSFORMERS_CACHE'] = cache

if args.model_name_or_path is None:
    args.model_name_or_path = 'barissayil/bert-sentiment-analysis-sst'

# Configuration for the desired transformer model
config = AutoConfig.from_pretrained(args.model_name_or_path, cache_dir=cache)
I have tried the solution in the above-mentioned question and also tried cache_dir. The transformers folder is in the same directory as analyze.py, and the whole repo together with the transformers folder is in the New Sentiment Analysis directory.
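To make clear what I mean by shipping the model with the build, this is a rough sketch of what I want to end up with (the bundled_model folder name, the local_files_only flag, and AutoModelForSequenceClassification as the model class are just my own assumptions/placeholders, not something taken from the repo): save the files locally once on my machine, then have the packaged app load them from that folder without ever going online.

# One-time step on my machine: download the files and save them into a folder
# that will be shipped with the build (folder name is a placeholder)
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification

model_name = 'barissayil/bert-sentiment-analysis-sst'
local_dir = 'bundled_model'

AutoConfig.from_pretrained(model_name).save_pretrained(local_dir)
AutoTokenizer.from_pretrained(model_name).save_pretrained(local_dir)
AutoModelForSequenceClassification.from_pretrained(model_name).save_pretrained(local_dir)

# In the shipped app: load everything from the bundled folder, no download
config = AutoConfig.from_pretrained(local_dir, local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
model = AutoModelForSequenceClassification.from_pretrained(local_dir, local_files_only=True)

Is something like this the right way to do it, or should the cache_dir / TRANSFORMERS_CACHE approach above work as well?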