
I'm new to LangChain and I'm trying to use a PromptTemplate inside a ConversationalRetrievalChain. I figured I should use combine_docs_chain_kwargs for this, but I'm still getting an error, probably because of the input style. Please let me know if anyone is familiar with this issue.

This is my code:


# Imports assume the classic (pre-0.1) LangChain package layout used here
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# Define the system message template
system_template = """Every answer should end with "This is according to the 10th article"."""

# Create the chat prompt template
messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}"),
]
qa_prompt = ChatPromptTemplate.from_messages(messages)

qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(),
    retriever=vectorstore.as_retriever(),
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)

# chat_history is required unless the chain is given a memory object
result = qa({"question": "What is the job description?", "chat_history": []})

print("llm output:", result)
             

But I'm getting this error:

File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for StuffDocumentsChain
__root__
  document_variable_name context was not found in llm_chain input_variables: ['question'] (type=value_error)
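For context on where this validation comes from: StuffDocumentsChain "stuffs" the retrieved documents into one prompt variable (named "context" by default), so it checks that this placeholder actually exists among the prompt's input variables and raises otherwise. A rough stand-in for that check in plain Python (illustrative only, not the actual LangChain source):

```python
import string

# Illustrative stand-in for the validation StuffDocumentsChain performs:
# the retrieved documents are injected under `document_variable_name`
# ("context" by default), so that placeholder must appear in the template.
def check_prompt(template: str, document_variable_name: str = "context") -> None:
    input_variables = [
        field for _, field, _, _ in string.Formatter().parse(template) if field
    ]
    if document_variable_name not in input_variables:
        raise ValueError(
            f"document_variable_name {document_variable_name} was not found "
            f"in llm_chain input_variables: {input_variables}"
        )

check_prompt("{context}\n\n{question}")  # passes: {context} is present
```

Calling `check_prompt("{question}")` raises the same kind of ValueError as the traceback above, because the prompt only declares `{question}`.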

I tried condense_question_prompt as well, but it doesn't give the answer I'm expecting.

Update: it works when I add "{context}" to the system template, like this: """Every answer should end with "This is according to the 10th article". {context}"""

But I get an error when I add multiple variables, for example: """Answer the question in {number} lines. Every answer should end with "This is according to the 10th article". {context}"""
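One likely cause of the multiple-variables error: ConversationalRetrievalChain only feeds question, chat_history, and the retrieved context into the combine-docs prompt, so an extra placeholder like {number} is left unfilled. A sketch of one workaround (assuming the classic langchain package layout, and that {number} can be fixed up front) is to pre-bind it with .partial() so the prompt no longer exposes it as an open input:

```python
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

system_template = (
    "Answer the question in {number} lines. "
    'Every answer should end with "This is according to the 10th article". '
    "{context}"
)
messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}"),
]

# Pre-fill {number} so the chain only has to supply {context} and {question}.
qa_prompt = ChatPromptTemplate.from_messages(messages).partial(number="3")

print(sorted(qa_prompt.input_variables))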

Niyas