
I am new to LLMs. I need to run an LLM on a local server and download different models to experiment with. I am trying to follow this guide from Hugging Face: https://huggingface.co/docs/transformers/installation#offline-mode

To begin with, I picked "CalderaAI/30B-Lazarus" and ran the following script:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    model_name="CalderaAI/30B-Lazarus"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

I got this error message:

ValueError: Unrecognized configuration class <class 'transformers.models.llama.configuration_llama.LlamaConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM. Model type should be one of BartConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, EncoderDecoderConfig, FSMTConfig, GPTSanJapaneseConfig, LEDConfig, LongT5Config, M2M100Config, MarianConfig, MBartConfig, MT5Config, MvpConfig, NllbMoeConfig, PegasusConfig, PegasusXConfig, PLBartConfig, ProphetNetConfig, SwitchTransformersConfig, T5Config, XLMProphetNetConfig.

Is it because AutoModelForSeq2SeqLM is not compatible with "CalderaAI/30B-Lazarus"? If so, how do I determine which class is compatible with a given model?

Thanks in advance!

zoomraider

1 Answer


LLaMA-based checkpoints such as "CalderaAI/30B-Lazarus" are decoder-only causal language models, not encoder-decoder (seq2seq) models, which is why LlamaConfig is not in the list of configurations that AutoModelForSeq2SeqLM accepts. Load it with AutoModelForCausalLM instead:

    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("CalderaAI/30B-Lazarus")
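
To answer the second part of your question: the architectures field in a checkpoint's config tells you which head class the model was saved with, which maps to the matching Auto class (for example, a name ending in ForCausalLM goes with AutoModelForCausalLM). A minimal check, assuming the repository ships a standard config.json (most checkpoints on the Hub do):

    from transformers import AutoConfig

    # For a LLaMA-based repo this typically prints ["LlamaForCausalLM"],
    # which corresponds to AutoModelForCausalLM.
    config = AutoConfig.from_pretrained("CalderaAI/30B-Lazarus")
    print(config.architectures)

Since your goal is offline use per the guide you linked, a common pattern (a sketch; the local directory name below is just an example) is to download the model once, save it to disk, and later reload it with local_files_only=True so nothing is fetched from the Hub:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "CalderaAI/30B-Lazarus"
    local_dir = "./30B-Lazarus"  # example path; point this at your server's storage

    # Download once while online and save a local copy.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer.save_pretrained(local_dir)
    model.save_pretrained(local_dir)

    # Later, load entirely from disk without contacting the Hub.
    tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(local_dir, local_files_only=True)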
Harshad Patil