
I'm getting the following error while trying to run LangChain code.

ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents'].
Traceback:
File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "D:\Python Projects\POC\Radium\Ana\app.py", line 49, in <module>
    answer = question_chain.run(formatted_prompt)
File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\langchain\chains\base.py", line 106, in run
    f"`run` not supported when there is not exactly one input key, got ['question', 'documents']."

My code is as follows.

import os
from apikey import apikey

import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
#from langchain.memory import ConversationBufferMemory
from docx import Document

os.environ['OPENAI_API_KEY'] = apikey

# App framework
st.title(' Colab Ana Answering Bot..')
prompt = st.text_input('Plug in your question here')


# Upload multiple documents
uploaded_files = st.file_uploader("Choose your documents (docx files)", accept_multiple_files=True, type=['docx'])
document_text = ""

# Read and combine Word documents
def read_docx(file):
    doc = Document(file)
    full_text = []
    for paragraph in doc.paragraphs:
        full_text.append(paragraph.text)
    return '\n'.join(full_text)

for file in uploaded_files:
    document_text += read_docx(file) + "\n\n"

with st.expander('Contextual Prompt'):
    st.write(document_text)

# Prompt template
question_template = PromptTemplate(
    input_variables=['question', 'documents'],
    template='Given the following documents: {documents}. Answer the question: {question}'
)

# Llms
llm = OpenAI(temperature=0.9)
question_chain = LLMChain(llm=llm, prompt=question_template, verbose=True, output_key='answer')

# Show answer if there's a prompt and documents are uploaded
if prompt and document_text:
    formatted_prompt = question_template.format(question=prompt, documents=document_text)
    answer = question_chain.run(formatted_prompt)
    st.write(answer['answer'])

I have gone through the documentation and I am still getting the same error. I have already seen demos where LangChain accepts multiple inputs.

Daremitsu
  • There is some inconsistency in the error message. It says: File "D:\Python Projects\POC\Radium\Ana\app.py", line 49, in answer = question_chain.run(input_variables) but on line 49 of the code you posted, the parameter passed to run isn't input_variables; it's formatted_prompt instead. – ChalsBP May 08 '23 at 10:31
  • Extremely sorry. Allow me to correct this. – Daremitsu May 08 '23 at 11:15

2 Answers


For a prompt with multiple inputs, use `predict()` instead of `run()`, or just call the chain directly. (Note: LangChain requires Python 3.8+.)

prompt_template = "Tell me a {adjective} joke and make it include a {profession}"
llm_chain = LLMChain(
    llm=OpenAI(temperature=0.5),
    prompt=PromptTemplate.from_template(prompt_template)
)

# Option 1: call the chain directly; returns a dict containing the inputs plus the output key
llm_chain(inputs={"adjective": "corny", "profession": "plumber"})

# Option 2: predict() takes the template variables as keyword arguments and returns the output string
llm_chain.predict(adjective="corny", profession="plumber")

Also note that you only assign the PromptTemplate when instantiating the LLMChain; after that you pass in just the template variables, in your case documents and question, rather than the pre-formatted template string as you do currently.
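
Applied to the code in your question, that would look roughly like this (a sketch reusing your prompt and document_text variables; note that predict() returns the generated string directly, so there is no ['answer'] lookup):

# Pass the raw template variables instead of a pre-formatted prompt string
if prompt and document_text:
    answer = question_chain.predict(question=prompt, documents=document_text)
    st.write(answer)  # predict() returns a string, not a dict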

andrew_reece
  • Hello, I followed your solution but the error was still occurring. Then I switched to Python 3.10 (I was previously on Python 3.7). It seems LangChain needs to run on Python > 3.8. Have they mentioned this in any documentation? – Daremitsu May 11 '23 at 07:56
  • Ah, that would be part of the problem for sure. There isn't anything in the main documentation I can see about Python versioning. Does this solution work for you with the upgraded Python? – andrew_reece May 11 '23 at 14:27
  • Yup. It only started working after I changed the Python version. Could you edit your answer to mention the specific Python version (i.e., > 3.8) so I can accept the solution? – Daremitsu May 12 '23 at 05:34
  • When using ConversationBufferMemory as the memory in the chain, I get the error raise ValueError(f"One input key expected got {prompt_input_keys}"). It only allows one input key in the PromptTemplate, not more. – Adriaan Aug 03 '23 at 21:26
  • @Adriaan, I have the same error. Using Python 3.11. Have you found a solution using memory? – pvasudev16 Aug 19 '23 at 21:51
  • @pvasudev16 Until the bug is fixed, I'm just storing my chat history in a list and then use a different LLM instance to summarize it within the given limit. I then pass this summary to the prompt so that it's taken into account by the main conversation LLM / chain. – Adriaan Aug 25 '23 at 08:10
  • @Adriaan I found a solution, which worked for me, and I logged it [here](https://stackoverflow.com/questions/76941870/valueerror-one-input-key-expected-got-text-one-text-two-in-langchain-wit). Basically, in the `ConversationBufferMemory`, you need to give an `input_key`, which has to be among the `input_variables` to every prompt. Hope it helps. – pvasudev16 Aug 26 '23 at 19:31
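
For reference, a minimal sketch of the fix described in that last comment (assumptions: the LangChain API as of mid-2023 and an illustrative prompt; the key point is passing input_key to ConversationBufferMemory so it knows which of the multiple inputs is the user's message):

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=['history', 'question', 'documents'],
    template='{history}\nGiven the following documents: {documents}. Answer the question: {question}'
)

# input_key tells the memory which input variable holds the user's message,
# so it no longer raises "One input key expected" for multi-input prompts.
memory = ConversationBufferMemory(input_key='question', memory_key='history')

chain = LLMChain(llm=OpenAI(temperature=0), prompt=template, memory=memory)
chain.predict(question='What topics are covered?', documents='...')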

I got the same error on Python 3.7.1, but after I upgraded Python to 3.10 and LangChain to the latest version, the error went away. I noticed this because the code ran fine on Colab but not locally.
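
If you suspect the same mismatch, a quick way to compare the two environments (a minimal sketch; assumes the installed langchain package exposes __version__):

import sys
import langchain

# Compare these between Colab and your local machine;
# LangChain needs Python 3.8+.
print("Python:", sys.version)
print("LangChain:", langchain.__version__)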

peebee