
I'm trying to build a chatbot that can chat about PDFs, and I got it working with memory using ConversationBufferMemory and ConversationalRetrievalChain, as in this example: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html
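
Roughly, my setup follows that example like this (just a sketch of what I have; the PDF path, splitter settings, and the choice of FAISS are placeholders for whatever you use):

from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load the PDF and split it into chunks the retriever can work with
docs = PyPDFLoader("my_document.pdf").load()  # placeholder path
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks into a vector store; the chain retrieves from this
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())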

Now I'm trying to give the AI some special instructions to talk like a pirate (just for testing, to see if it's receiving the instructions). I think this is meant to be a SystemMessage, or something with a prompt template?

I've tried everything I've found, but all the examples in the documentation are for ConversationChain, and I end up having problems with them. So far, the only thing that hasn't thrown any errors is this:

template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
    input_variables=["chat_history", "question"], template=template
)
memory = ConversationBufferMemory(
    memory_key='chat_history', return_messages=True, output_key='answer'
)
qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    vectorstore.as_retriever(),
    PROMPT,
    memory=memory,
    return_source_documents=True,
)

It still doesn't have any effect on the results, so I don't know if it's doing anything at all. I also suspect this is the wrong approach and that I should be using SystemMessages (maybe on the memory, not the qa chain), but nothing I've tried from the documentation works and I'm not sure what to do.


1 Answer


You can't pass PROMPT as a bare positional argument to ConversationalRetrievalChain.from_llm(): the third positional parameter is condense_question_prompt, which is only used to rewrite the follow-up into a standalone question for retrieval, so it never changes the answering style. Pass your prompt through the combine_docs_chain_kwargs param instead. Note that the answering prompt receives the retrieved documents as {context}, so the template needs a {context} variable (without it, the underlying StuffDocumentsChain raises a validation error). See the example below, adapted from your sample code:

template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""

PROMPT = PromptTemplate(
    input_variables=["chat_history", "question"], 
    template=template
)

memory = ConversationBufferMemory(
    memory_key='chat_history', 
    return_messages=True, 
    output_key='answer'
)

qa = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    memory=memory,
    return_source_documents=True,
    # the answering prompt goes to the combine-docs chain, not to from_llm itself
    combine_docs_chain_kwargs={"prompt": PROMPT}
)
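
Separately, if you also want to control how the follow-up gets rewritten before retrieval, from_llm() takes a condense_question_prompt by name; that is the slot your positional PROMPT was landing in. A sketch (the wording of the condense template here is just an example):

from langchain.prompts import PromptTemplate

# The condense prompt only rewrites the follow-up into a standalone question;
# it sees the chat history and the new question, not the retrieved documents.
CONDENSE_PROMPT = PromptTemplate.from_template(
    """Given the following conversation and a follow up question, rephrase
the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
)

qa = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    memory=memory,
    return_source_documents=True,
    condense_question_prompt=CONDENSE_PROMPT,
    combine_docs_chain_kwargs={"prompt": PROMPT}
)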

Then get the result:

result = qa({"question": query})
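
Since return_source_documents=True and output_key='answer', result is a dict, so you can quickly check whether the pirate instruction took effect:

print(result["answer"])                 # should end sentences with "Ay Ay Matey"
for doc in result["source_documents"]:
    print(doc.metadata)                 # which PDF chunks the answer drew on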
  • Looks like you mentioned a similar issue here: https://stackoverflow.com/questions/76240871/how-do-i-add-memory-to-retrievalqa-from-chain-type-or-how-do-i-add-a-custom-pr Did you figure out sending your custom `PROMPT` through `ConversationalRetrievalChain.from_llm()`? – Rijoanul Hasan Shanto May 17 '23 at 05:48