
My goal is to make a ChatBot able to:

  1. have memory
  2. take as input some documents
  3. rely ONLY on the documents given and say "I don't know" when the information is not in the input documents.

I found this tutorial, super useful: https://github.com/jerryjliu/llama_index/blob/main/examples/chatbot/Chatbot_SEC.ipynb

It works perfectly, BUT I don't understand how to apply point 3 (rely only on input documents).

Do you think this would be easier with LangChain? Thanks a lot and have a nice day! Carlo

Noticing that create_llama_chat_agent relies on the langchain/agents/conversational/prompt.py file to build the prompt, I manipulated that PREFIX text to say something like "please rely only on your context", but it was not effective.
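For reference, the attempt looked roughly like this. The PREFIX string below is illustrative, not the actual text from langchain/agents/conversational/prompt.py:

```python
# Sketch of the attempted fix: prepending a restriction to the agent's
# conversational PREFIX. The real PREFIX lives in
# langchain/agents/conversational/prompt.py; this value is illustrative.
PREFIX = "Assistant is a large language model having a conversation with a human."

RESTRICTED_PREFIX = (
    'Please rely ONLY on the context provided; if the answer is not '
    'in the context, say "I don\'t know".\n\n' + PREFIX
)
```

As noted above, changing only the agent prefix was not enough to stop the model from answering out of its own knowledge.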


1 Answer


After a lot of tests, it turned out to be easier than expected, using this tutorial: https://blog.langchain.dev/tutorial-chatgpt-over-your-data/

I simply modified the QA_PROMPT, instructing the model to answer questions only from the given context.
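The modified prompt looked roughly like this. The template is shown as a plain string here (in LangChain it would be wrapped in a PromptTemplate with input_variables=["context", "question"]), and the exact wording is a sketch rather than the tutorial's original text:

```python
# Sketch of a "context-only" QA prompt. The wording is illustrative;
# the key point is the explicit instruction to refuse when the answer
# is not in the retrieved context.
QA_TEMPLATE = """Use ONLY the following pieces of context to answer the question.
If the answer is not contained in the context, just say "I don't know" --
do not use outside knowledge and do not make up an answer.

Context:
{context}

Question: {question}
Helpful Answer:"""

def build_prompt(context: str, question: str) -> str:
    """Fill the template with the retrieved context and the user question."""
    return QA_TEMPLATE.format(context=context, question=question)
```

Passing a prompt like this to the chain's question-answering step is what makes the model fall back to "I don't know" instead of answering from its training data.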

It is working!

Hope this is useful for you guys, have a nice day! Carlo
