I'm trying to run a chain in LangChain with memory and multiple inputs. The closest error I could find was posted here, but in that one, only a single input is being passed.

Here is the setup:

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory

llm = OpenAI(
    model="text-davinci-003",
    openai_api_key=environment_values["OPEN_AI_KEY"], # Used dotenv to store API key
    temperature=0.9,
    client="",
)

memory = ConversationBufferMemory(memory_key="chat_history")

prompt = PromptTemplate(
    input_variables=[
        "text_one",
        "text_two",
        "chat_history"
    ],
    template=(
        """You are an AI talking to a huamn. Here is the chat
        history so far:

        {chat_history}

        Here is some more text:

        {text_one}

        and here is even more text:

        {text_two}
        """
    )
)

chain = LLMChain(
    llm=llm,
    prompt=prompt,
    memory=memory,
    verbose=False
)

When I run

output = chain.predict(
    text_one="Hello",
    text_two="World"
)

I get ValueError: One input key expected got ['text_one', 'text_two']
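Digging around, the error seems to come from how the memory decides which chain input to record as the human side of the conversation. As far as I can tell, langchain's memory utilities do roughly the following (a paraphrased sketch, not the exact library source; a get_prompt_input_key helper does exist in langchain.memory.utils, but treat the details here as an approximation):

def get_prompt_input_key(inputs, memory_variables):
    # Every chain input that is not a memory variable (or the special
    # "stop" key) is a candidate; the memory only knows what to save
    # if there is exactly one candidate.
    prompt_input_keys = [
        k for k in inputs if k not in memory_variables and k != "stop"
    ]
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

With both text_one and text_two as inputs, there are two candidates, hence the error.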

I've looked at this Stack Overflow post, which suggests trying:

output = chain(
    inputs={
        "text_one" : "Hello",
        "text_two" : "World"
    }
)

which gives the exact same error. In the spirit of trying different things, I've also tried:

output = chain.predict( # Also tried .run() here
    inputs={
        "text_one" : "Hello",
        "text_two" : "World"
    }
)

which gives Missing some input keys: {'text_one', 'text_two'}.
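(That second error makes sense, I think: predict and run pass keyword arguments straight through as the chain inputs, so here inputs itself becomes the only input key and the prompt variables text_one and text_two are never supplied.)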

I've also looked at this issue on the langchain GitHub, which suggests passing the llm into the memory, i.e.

# Everything the same except...
memory = ConversationBufferMemory(llm=llm, memory_key="chat_history") # Note the llm here

and I still get the same error. If someone knows a way around this error, please let me know. Thank you.


1 Answer


While drafting this question, I came across the answer.

When defining the memory, pass input_key="human_input" so that the memory knows which input to record, and make sure the prompt includes human_input among its input variables.

memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="human_input"
)

Then, make sure the prompt template actually declares and uses human_input.

prompt = PromptTemplate(
    input_variables=[
        "text_one",
        "text_two",
        "chat_history",
        "human_input", # Even if it's blank
    ],
    template=(
        """You are an AI talking to a human. Here is the chat
        history so far:

        {chat_history}

        Here is some more text:

        {text_one}

        and here is even more text:

        {text_two}

        {human_input}
        """
    )
)

Then, build your chain:

chain = LLMChain(
    llm=llm,
    prompt=prompt,
    memory=memory, # Contains the input_key
    verbose=False
)

And then run it as:

output = chain.predict(
    human_input="", # or whatever you want
    text_one="Hello",
    text_two="World"
)
print(output)
# On my machine, it outputs: '\nAI: Hi there! How can I help you?'
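
The input_key tells the memory which input to store as the human turn, so it no longer has to guess between text_one and text_two. If you want to confirm what got saved, you can inspect the memory afterwards (this inspection step is my own addition; load_memory_variables is part of the memory API):

print(memory.load_memory_variables({}))
# Something like: {'chat_history': 'Human: \nAI: Hi there! How can I help you?'}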