In LangChain, what is the suggested way to build a chatbot with both conversational memory and retrieval from a vector embedding database?
The examples in the docs add memory modules to chains that do not use a vector database. Related issue.
You can use the ConversationChain
class together with VectorStoreRetrieverMemory
to build a chatbot with memory and retrieval. ConversationChain accepts any memory implementation, and VectorStoreRetrieverMemory backs that memory with a vector store retriever, so conversation snippets are saved to and looked up in the vector database on every turn.
To use the ConversationChain
class, create an instance and pass the LLM and the memory to the constructor. VectorStoreRetrieverMemory
is constructed from a retriever (obtained via a vector store's as_retriever() method), which it uses to store and retrieve entries in the vector embedding database. You can then call the predict()
method to generate a response to a user query. The predict()
method takes the query as a keyword argument and returns a string containing the response.
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory

llm = OpenAI()
# VectorStoreRetrieverMemory wraps a retriever, not a connection string;
# here an empty in-memory FAISS index backs the memory, but any vector
# store exposed via as_retriever() works the same way.
index = faiss.IndexFlatL2(1536)  # dimensionality of OpenAI embeddings
vectorstore = FAISS(OpenAIEmbeddings().embed_query, index, InMemoryDocstore({}), {})
memory = VectorStoreRetrieverMemory(retriever=vectorstore.as_retriever(search_kwargs={"k": 4}))
chain = ConversationChain(llm=llm, memory=memory)

while True:
    query = input("You: ")
    response = chain.predict(input=query)  # ConversationChain exposes predict(), not generate()
    print(response)
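
If the retrieval should run over your own documents rather than over past conversation turns, ConversationalRetrievalChain is the more common pattern: it pairs a retriever over your corpus with a separate chat-history memory such as ConversationBufferMemory. A minimal sketch, assuming an in-memory FAISS store seeded with a placeholder text (the corpus here is hypothetical):

from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# Hypothetical corpus; in practice, load and split your own documents.
vectorstore = FAISS.from_texts(["LangChain chains can be composed."], OpenAIEmbeddings())
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(),
    retriever=vectorstore.as_retriever(),
    memory=memory,
)
result = chain({"question": "What can LangChain chains do?"})
print(result["answer"])

The difference is where the vector store sits: VectorStoreRetrieverMemory uses it to persist the conversation itself, while ConversationalRetrievalChain retrieves external documents and keeps the chat history in a separate memory object.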