
LangChain makes it straightforward to send output from one LLMChain object to the next using the SimpleSequentialChain class. This is a very simplified example:

from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate

chain = []
pt = PromptTemplate(input_variables=["_prior"], template="{_prior}")  # Consume top-level input directly
chain.append( LLMChain(llm=llm, prompt=pt) )
pt = PromptTemplate(input_variables=["_prior"], template="Given {_prior}, how about those Mets?")
chain.append( LLMChain(llm=llm, prompt=pt) )

seq_chain = SimpleSequentialChain(chains=chain)

final_answer = seq_chain.run("This is the top level input and is substituted for _prior "
                             "in the first item of the chain")

This all works fine. BTW, nothing is special about _prior; I could have named it hotdog. It is simple named-parameter substitution, and _prior merely indicates that the input comes from the output of the previous component.

What I need to do is incorporate vector embeddings fetched from a private store as part of the chain. Separately, I have code that takes a prompt, converts it to an embedding vector, then passes that as a query to the store, and this too works fine. I now have a set of vectors, each 1536 elements long. I want to do something like this:

chain = []

#  Front-load the chain with our private info:
for v in list_of_vectors:
    chain.append( LLMChain(llm=llm, vector=v) )

#  Now append a regular text prompt:
pt = PromptTemplate(input_variables=["_prior"], template="Given {_prior}, how about those Mets?")
chain.append( LLMChain(llm=llm, prompt=pt) )
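For concreteness, the separately-working retrieval step described above (prompt in, vectors out) can be sketched as follows. Everything here is a stand-in: toy_embed and ToyVectorStore are hypothetical mocks for the real embedding model and private store, and the 4-element vectors stand in for the 1536-element ones:

```python
import math

def toy_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model; hashes characters
    # into 4 buckets and normalizes to unit length.
    v = [0.0] * 4
    for ch in text:
        v[ord(ch) % 4] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a, b):
    # Dot product; vectors are already unit length.
    return sum(x * y for x, y in zip(a, b))

class ToyVectorStore:
    """Brute-force stand-in for a private vector store."""
    def __init__(self):
        self._entries = []  # (vector, original_text) pairs

    def add(self, text: str):
        self._entries.append((toy_embed(text), text))

    def query(self, vector, k=2):
        # Return the k stored vectors nearest to the query vector.
        ranked = sorted(self._entries, key=lambda e: -cosine(e[0], vector))
        return [vec for vec, _text in ranked[:k]]

store = ToyVectorStore()
store.add("The Mets won the 1986 World Series.")
store.add("Embeddings map text to points in a vector space.")

# Prompt -> embedding vector -> query the store:
qv = toy_embed("Tell me about the Mets")
list_of_vectors = store.query(qv)
```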

In other words, I want to bypass the prompt text-to-vector stage and use the vectors directly. But I cannot seem to find this capability in the LLMChain object, nor any other wizardry involving the base Chain class. Note: there are posts out there suggesting that after a successful vector store retrieval, you should re-run the input encoder that originally produced the text behind the stored vector, regenerate that text, and pass it to LLMChain(prompt=that_rerun_text). But I am hoping there is a more elegant way.
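For what it's worth, the common alternative to re-running an encoder is to store the original text alongside each vector so that retrieval hands the text back directly (this is what LangChain's vector store wrappers do: similarity_search returns Document objects whose page_content is the stored text), and then stuff that text into the first prompt of the chain. A rough sketch, with hypothetical retrieved_texts standing in for the actual retrieval results:

```python
# Hypothetical snippets standing in for Document.page_content values
# returned by a real vector store retrieval:
retrieved_texts = [
    "The Mets won the 1986 World Series.",
    "The Mets play at Citi Field in Queens.",
]

# Stuff the retrieved text into the first prompt; _prior then carries
# the answer forward through the rest of the chain as usual.
context = "\n".join(retrieved_texts)
template = (
    "Use the following private context to answer.\n"
    "Context:\n{context}\n"
    "Question: {question}"
)
first_prompt = template.format(
    context=context,
    question="How about those Mets?",
)
```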

Buzz Moschetti
