
Here is what I am doing. I am retrieving indexed embeddings from a Pinecone vector database and using them to answer questions. The problem is that when the answer is not available in the index, the chain falls back to the ChatGPT LLM's own knowledge. How can I restrict it so that it only answers from the previously indexed embeddings, and simply returns no answer when nothing relevant is found? Appreciate your help.

import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.vectorstores import Pinecone
import pinecone
pinecone.init(
    api_key="{pinecone_api key}",
    environment="asia-southeast1-gcp-free",
)


os.environ["OPENAI_API_KEY"]="{openAI_API_key}"


def run_llm(query: str):
    embeddings = OpenAIEmbeddings()
    doc_search = Pinecone.from_existing_index(index_name="vector-index1", embedding=embeddings)
    chat = ChatOpenAI(verbose=True, temperature=0)
    qa = RetrievalQA.from_chain_type(llm=chat, chain_type="stuff", retriever=doc_search.as_retriever())
    return qa({"query": query})


print(run_llm(query="what is assignment operator"))

Above is what I tried, and I am still getting unexpected answers from the ChatGPT LLM when the answer is not present in the embeddings I indexed in the Pinecone vector database.
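One approach I have seen suggested (I have not verified that it fully solves my case) is to pass a custom prompt to the "stuff" chain via chain_type_kwargs, telling the model to answer only from the retrieved context and to return a fixed fallback string otherwise. The prompt wording and the "I don't know" fallback below are my own assumptions, not something already in my code:

from langchain.prompts import PromptTemplate

# Prompt that tells the model to answer only from the retrieved context
# and to fall back to a fixed string when the context has no answer.
restricted_prompt = PromptTemplate(
    template=(
        "Answer the question using ONLY the context below.\n"
        "If the answer is not contained in the context, reply exactly with \"I don't know\".\n\n"
        "Context:\n{context}\n\n"
        "Question: {question}\n"
        "Answer:"
    ),
    input_variables=["context", "question"],
)

qa = RetrievalQA.from_chain_type(
    llm=chat,
    chain_type="stuff",
    retriever=doc_search.as_retriever(),
    chain_type_kwargs={"prompt": restricted_prompt},  # override the default QA prompt
)

Another option I am considering (also untested on my side) is making the retriever itself stricter, e.g. doc_search.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8}), so that off-topic queries retrieve no documents at all.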
