
I downloaded a model, and it is located at this path: /home/wisehipoppotamus/LLAMA

Inside the LLAMA folder there are 4 subfolders, one for each model size:

  • 7B
  • 13B
  • 30B
  • 65B

Plus 2 files:

  • tokenizer.model
  • tokenizer_checklist.chk

Here is a screenshot of the directory:

[screenshot: LLAMA directory listing]

My Python code to run the model looks like this:

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('/home/wisehipoppotamus/LLAMA')
model = GPT2LMHeadModel.from_pretrained('/home/wisehipoppotamus/LLAMA/65B')
input_text = "What are penguins?"
encoded_input = tokenizer.encode(input_text, return_tensors='pt')  # 'pt' = PyTorch tensors
output = model.generate(encoded_input, max_length=200, num_return_sequences=1)
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)

When I run this code I get this error:

python3 ai_model_test.py
Traceback (most recent call last):
  File "/home/wisehipoppotamus/ai_model_test.py", line 4, in <module>
    tokenizer = GPT2Tokenizer.from_pretrained('/home/wisehipoppotamus/LLAMA')
  File "/home/wisehipoppotamus/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for '/home/wisehipoppotamus/LLAMA'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/wisehipoppotamus/LLAMA' is the correct path to a directory containing all relevant files for a GPT2Tokenizer tokenizer.

From what I saw, the paths are correct for the tokenizer and for the specific model, respectively. I don't understand this error. Can anybody help me? Thanks.
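To double-check, I also listed the directory contents in Python. As far as I understand from the transformers docs, `GPT2Tokenizer` expects `vocab.json` and `merges.txt` in the folder (this is my assumption, and `missing_tokenizer_files` is just a helper I wrote for the check), while my LLAMA folder only contains `tokenizer.model` and `tokenizer_checklist.chk`:

```python
import os

def missing_tokenizer_files(path, required=("vocab.json", "merges.txt")):
    """Return which of the files GPT2Tokenizer needs are absent from `path`."""
    present = set(os.listdir(path)) if os.path.isdir(path) else set()
    return [name for name in required if name not in present]

print(missing_tokenizer_files('/home/wisehipoppotamus/LLAMA'))
```

In my case this prints `['vocab.json', 'merges.txt']`, so maybe the problem is the file format rather than the path?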
