I am trying to build an LLM application that generates SQL queries. Does anyone know how to use HuggingFacePipeline.from_pretrained to load a locally stored LLM model? from_pretrained is not working here: the method does not exist on HuggingFacePipeline.
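Roughly what I tried first (a minimal sketch; "LaMini-T5-738M" is the local directory holding the downloaded model files):

from langchain.llms import HuggingFacePipeline

# what I expected to work -- fails because the method does not exist
local_llm = HuggingFacePipeline.from_pretrained("LaMini-T5-738M")  # AttributeError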
--------- Adding Information ---------
I am using LaMini-T5-738M as the base model, run locally; the model files are downloaded to disk. LangChain's HuggingFacePipeline does not seem to expose a loading function for local models, so I load the model through a transformers pipeline object and wrap it, as in the code below. My intention is to connect it to a database and generate queries, but that does not seem to be working.
I am uncertain whether this is due to a limitation of the model used. If so, I could not find any references online to a model that supports anything beyond chat and Q&A. The code:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.utilities import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain_experimental.sql import SQLDatabaseChain

# psycopg2 is the PostgreSQL driver, so the dialect prefix is postgresql+psycopg2
db_uri = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{dbName}"
db = SQLDatabase.from_uri(db_uri)
checkpoint = "LaMini-T5-738M"  # local directory containing the downloaded model files
modelSeq = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pipe = pipeline(
    "text2text-generation",
    model=modelSeq,
    tokenizer=tokenizer,
)
local_llm = HuggingFacePipeline(pipeline=pipe)  # wrap the transformers pipeline for LangChain
db_chain = SQLDatabaseChain.from_llm(local_llm, db, verbose=True)
toolkit = SQLDatabaseToolkit(db=db, llm=local_llm)
agent_executor = create_sql_agent(toolkit=toolkit, llm=local_llm)
agent_executor.run(<query>)  # gives garbage or unintelligible output
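For reference, calling the wrapped LLM directly bypasses the agent and shows whether the base model itself produces usable text (a minimal sanity check; the prompt below is only an illustrative placeholder, not my real query):

# bypass the SQL agent and query the wrapped pipeline directly
print(local_llm("Translate to SQL: list all customers who signed up in 2023"))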