
I need to use LLMChain with a locally stored model. I have the code below:

llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-large", model_kwargs={..some params}))

Instead of repo_id, I need to provide a local path. Please advise how I can update that.

Thank you

Khushi

1 Answer


You can build your chain as you would in Hugging Face, with local_files_only=True. Here is an example:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# local_files_only=True makes transformers load from the local path/cache
# and never attempt a download.
tokenizer = AutoTokenizer.from_pretrained(your_tokenizer, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(
    your_model_path,
    device_map=device_map,
    torch_dtype=torch.float16,
    max_memory=max_mem,
    quantization_config=quantization_config,
    local_files_only=True,
)
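
Note that google/flan-t5-large from the question is an encoder-decoder (seq2seq) model, so AutoModelForCausalLM will not load it. A minimal sketch for that particular model, with a hypothetical local path; the pipeline task below then becomes "text2text-generation" rather than "text-generation":

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical local directory containing the downloaded model files.
model_path = "/models/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path, local_files_only=True)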

Then you build the pipeline:

from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Wrap the local model in a transformers text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15,
)

# Expose the pipeline to LangChain as an LLM.
local_llm = HuggingFacePipeline(pipeline=pipe)

Now you can feed the pipeline to LangChain:

llm_chain = LLMChain(prompt=prompt, llm=local_llm)
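
For completeness, a minimal end-to-end sketch of the chain, assuming a simple question-answering prompt (the template and question here are hypothetical):

from langchain import LLMChain, PromptTemplate

# Hypothetical prompt template; substitute your own.
template = "Question: {question}\n\nAnswer:"
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=local_llm)
print(llm_chain.run("What is LangChain?"))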
kolergy