In Langchain, what is the suggested way to build a chatbot with memory and retrieval from a vector embedding database at the same time?

The examples in the docs add memory modules to chains that do not have a vector database. Related issue.

Rexcirus
  • There are multiple ways of doing this, but this Weaviate blog post might be helpful https://weaviate.io/blog/combining-langchain-and-weaviate – Bob van Luijt Mar 31 '23 at 13:11

1 Answer

-1

You can use the ConversationChain class together with VectorStoreRetrieverMemory to build a chatbot that has both conversational memory and retrieval from a vector store.

Create the chain by passing the LLM and the memory to the constructor. VectorStoreRetrieverMemory wraps a vector store retriever (not a database connection string), so relevant past exchanges are fetched from the store on every turn. You then call the predict() method with the user query as the input keyword argument; it returns the model's response as a string.

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Build (or load) a vector store and expose it as a retriever.
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_texts(["initial context"], embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# VectorStoreRetrieverMemory takes a retriever, not a connection string.
memory = VectorStoreRetrieverMemory(retriever=retriever)

llm = OpenAI()
chain = ConversationChain(llm=llm, memory=memory)

while True:
    query = input()
    response = chain.predict(input=query)
    print(response)
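For intuition, a vector-store-backed memory essentially embeds each saved turn and retrieves the k most similar past turns on every query. The following is a minimal, dependency-free sketch of that idea — the VectorMemory class, toy embed(), and cosine() helpers are made-up names for illustration, not LangChain's implementation:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts (real systems use a model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    def __init__(self, k=2):
        self.k = k
        self.turns = []  # list of (text, vector) pairs

    def save(self, text):
        # Store the turn alongside its embedding.
        self.turns.append((text, embed(text)))

    def load(self, query):
        # Return the k past turns most similar to the query.
        qv = embed(query)
        ranked = sorted(self.turns, key=lambda t: cosine(qv, t[1]), reverse=True)
        return [text for text, _ in ranked[:self.k]]

memory = VectorMemory(k=1)
memory.save("My favorite food is pizza")
memory.save("The meeting is at 3pm")
print(memory.load("What food do I like?"))  # the pizza turn ranks first
```

The real VectorStoreRetrieverMemory does the same thing with model embeddings and a persistent vector store, which is why it scales to long histories better than a plain conversation buffer.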
Codemaker2015
  • The answer is generated by Google Bard. I got a very similar answer when I pasted the question into Google Bard. It also failed the bot tests at https://www.zerogpt.com/ and https://gptzero.me/ – Hongbo Miao Sep 02 '23 at 22:57