Is there any way to access the information retrieved from the vectordb, i.e. the documents that get injected as {context} into the prompt?
Here is a sample code snippet I have written for this purpose, but the output is not what I expect:
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.prompts.prompt import PromptTemplate
from langchain.vectorstores import Annoy

texts = [
    "1st line to be embedded......",
    "2nd line to be embedded......",
]

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vectordb = Annoy.from_texts(texts=texts, embedding=embeddings)
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={'k': 1})

prompt_template = """Return these APIs, exactly as provided to you, back to the user.
APIs: {context}
"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=['context']
)

chain_type_kwargs = {"prompt": PROMPT}
qa = RetrievalQA.from_chain_type(
    llm=model,  # could be any LLM
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs=chain_type_kwargs,
)

print(qa.run(query))  # query is a question string defined elsewhere
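qa.run(query) only ever gives me back the LLM's answer string, and I could not find a clean way to see which documents were actually injected as {context}. Is the return_source_documents flag the intended mechanism for that? A rough, untested sketch of what I have in mind (qa.run() apparently cannot be used when the chain has multiple output keys, so the chain is called directly):

qa = RetrievalQA.from_chain_type(
    llm=model,
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs=chain_type_kwargs,
    return_source_documents=True,  # also return the retrieved Document objects
)
result = qa({"query": query})
print(result["result"])            # the LLM answer
print(result["source_documents"])  # the documents that were stuffed into {context}

This still goes through the LLM, though, which is not really what I want.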
Basically, I want to know how to get the retrieved documents from the external dataset deterministically, without going through the LLM.
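In other words, something that behaves roughly like the sketch below, pulling the top-k matches straight from the retriever / vector store without involving the LLM at all (untested, reusing the retriever and vectordb objects from the snippet above; I am not sure this is the supported way, or whether it is guaranteed to match what the chain actually retrieved):

# Ask the retriever directly for the top-k matching documents (no LLM call)
docs = retriever.get_relevant_documents(query)
print([d.page_content for d in docs])

# Or query the vector store itself
print(vectordb.similarity_search(query, k=1))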