I'm trying to run a chain in LangChain with memory and multiple inputs. The closest error I could find was posted here, but in that one, only a single input is being passed.
Here is the setup:
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
llm = OpenAI(
    model="text-davinci-003",
    openai_api_key=environment_values["OPEN_AI_KEY"],  # Used dotenv to store API key
    temperature=0.9,
    client="",
)
memory = ConversationBufferMemory(memory_key="chat_history")
prompt = PromptTemplate(
    input_variables=[
        "text_one",
        "text_two",
        "chat_history"
    ],
    template=(
        """You are an AI talking to a human. Here is the chat
history so far:
{chat_history}
Here is some more text:
{text_one}
and here is even more text:
{text_two}
"""
    )
)
chain = LLMChain(
    llm=llm,
    prompt=prompt,
    memory=memory,
    verbose=False
)
When I run
output = chain.predict(
    text_one="Hello",
    text_two="World"
)
I get ValueError: One input key expected got ['text_one', 'text_two']
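For context, the traceback points into LangChain's memory code. As far as I can tell, when no input_key is set, the memory tries to infer which single input holds the human message, roughly like this (my paraphrase of langchain/memory/utils.py, so the exact code may differ by version):

def get_prompt_input_key(inputs, memory_variables):
    # Everything that isn't a memory variable (or the special "stop" key)
    # is assumed to be the single prompt input the memory should record.
    prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"]))
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

So with two non-memory inputs, the inference fails before the prompt is even formatted.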
I've looked at this stackoverflow post, which suggests trying:
output = chain(
    inputs={
        "text_one": "Hello",
        "text_two": "World"
    }
)
which gives the exact same error. In the spirit of trying different things, I've also tried:
output = chain.predict(  # Also tried .run() here
    inputs={
        "text_one": "Hello",
        "text_two": "World"
    }
)
which gives Missing some input keys: {'text_one', 'text_two'}. (I assume this happens because .predict treats each keyword argument itself as an input, so here the chain sees a single key named inputs instead of text_one and text_two.)
I've also looked at this issue on the langchain GitHub, which suggests passing the llm into the memory, i.e.
# Everything the same except...
memory = ConversationBufferMemory(llm=llm, memory_key="chat_history")  # Note the llm here
and I still get the same error.
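The only partial workaround I've found is pinning the memory to one of the inputs with input_key; this is just my guess from reading the memory source, not something from the docs:

# Everything the same except...
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="text_one",  # tell the memory which input to record, so it stops guessing
)

This makes the ValueError go away for me, but then only text_one ends up in the chat history, so I doubt it's the intended fix. If someone knows a way around this error, please let me know. Thank you.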