
I developed a script that worked just fine. It was as follows:

from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)


def get_response_from_query(db, query, k=4):

    docs = db.similarity_search(query, k=k)
    docs_page_content = " ".join([d.page_content for d in docs])

    chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)

    # Template to use for the system message prompt; it must contain {docs},
    # since chain.run() below passes docs=docs_page_content
    template = """
      this is a custom prompt template that uses the documents: {docs}
        """

    system_message_prompt = SystemMessagePromptTemplate.from_template(template)

    # Human question prompt
    human_template = "Answer the following question: {question}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

    chat_prompt = ChatPromptTemplate.from_messages(
        [system_message_prompt, human_message_prompt]
    )

    chain = LLMChain(llm=chat, prompt=chat_prompt)

    response = chain.run(question=query, docs=docs_page_content)
    response = response.replace("\n", "")
    return response, docs
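As a side note, the keyword arguments passed to `chain.run()` have to match the placeholders in the combined prompt exactly. A quick standard-library check illustrates this (the template strings below are stand-ins, not my real prompt):

```python
import string

# Stand-in templates mirroring the snippet above; the system template needs a
# {docs} placeholder for chain.run(question=..., docs=...) to succeed.
system_template = "Use this transcript to answer: {docs}"
human_template = "Answer the following question: {question}"

def placeholders(text):
    """Collect the {named} fields a format string expects."""
    return {name for _, name, _, _ in string.Formatter().parse(text) if name}

required = placeholders(system_template) | placeholders(human_template)
print(sorted(required))  # these are exactly the kwargs chain.run() must receive
```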

This worked very well until I tried to use it in a new app with Streamlit.

First, the first two lines of the function, which perform the similarity search, break the code with this error:

If I remove these lines, the rest of the code simply returns nothing to the interface.

I tried using a different method, as follows:

from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain


def get_conversation_chain(vectorstore):
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)

    memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory,
    )
    return conversation_chain

This works well until I try to add a fourth parameter to ConversationalRetrievalChain.from_llm, namely combine_docs_chain_kwargs={"prompt": prompt}. There, I build a prompt the same way I did in my first snippet, but I keep receiving errors saying that the placeholders {docs} or {user_question} are missing context:

ValidationError: 1 validation error for StuffDocumentsChain
__root__
  document_variable_name context was not found in llm_chain input_variables: [] (type=value_error)

Since the similarity search fails, I cannot pass anything to {docs}, and even removing it does not work. Where does {context} come from?
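For what it's worth, my current understanding is that {context} is the default document_variable_name of the StuffDocumentsChain that ConversationalRetrievalChain.from_llm builds internally, so a custom prompt passed through combine_docs_chain_kwargs would have to use {context} and {question} rather than {docs} and {user_question}. A minimal sketch of such a template (plain string, no LangChain calls, so take the wording as illustrative):

```python
# Sketch of a prompt using the variable names the default StuffDocumentsChain
# expects: {context} for the stuffed documents and {question} for the user input.
qa_template = """Answer using only the context below.
If the answer is not in the context, say "I don't know" instead of guessing.

{context}

Question: {question}
Answer:"""

# Render once with dummy values to confirm both placeholders resolve:
preview = qa_template.format(context="<retrieved docs>", question="<user question>")
print(preview.splitlines()[0])
```

If that assumption is right, this string would be wrapped with PromptTemplate.from_template(qa_template) and passed as combine_docs_chain_kwargs={"prompt": ...}, but I have not been able to confirm this.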

What am I missing here? Which method should I use, ConversationalRetrievalChain or LLMChain? And why doesn't the similarity search work?

My only need is to tell the LLM not to answer when it's unsure, and for that I need to send a custom template/prompt along with the user prompt.

I'm new to AI, please help.

As described above, I tried two methods.
