
Hi, I'm working on a project to create a conversational agent to help with critical literacy using OpenAI's API. I want the model (currently gpt-3.5-turbo) to draw on data stored in a directory, which is why I'm using modules from langchain. The barebones script I have written so far works, but I'm confused about how (or whether) I can improve it the way I want to. I'm new to this, but from tutorials I've read online I see that you are able to give the system a "role".

For example:

prompt = """You're a nutritionist chatbot that creates customized meal plans.
Only answer questions related to nutrition.
Only ask questions related to nutrition, health and meal plans."""

messages = [
    {
        "role": "system",
        "content": prompt
    }
]
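In those tutorials, the messages list is then passed straight to the chat completions endpoint, roughly like this (the user question here is just a placeholder I made up):

```python
# Building a messages list with a system role, as shown in the tutorials.
prompt = (
    "You're a nutritionist chatbot that creates customized meal plans. "
    "Only answer questions related to nutrition, health and meal plans."
)

messages = [
    {"role": "system", "content": prompt},
    {"role": "user", "content": "Suggest a high-protein breakfast."},
]

# The actual API call would then be something like:
# import openai
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# print(response["choices"][0]["message"]["content"])
```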

Because my script is built around the langchain modules, I'm unsure how I can incorporate this. I'll leave my code below, and if anyone can help I'd really appreciate it. Thanks.
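From searching around, it looks like ConversationalRetrievalChain.from_llm accepts a combine_docs_chain_kwargs argument that can carry a custom prompt, but I'm not sure I'm wiring it up correctly. The template wording below is just my guess at what a system-style prompt would look like, and the {context}/{question} placeholders are what the docs seem to require:

```python
# Rough, untested sketch of what I think might work; the template text is
# my own invention, not from any tutorial.
system_template = """You are FallacyFinder, an assistant for critical literacy.
Only answer questions about logical fallacies, using the context below.

{context}

Question: {question}
Answer:"""

# With langchain installed, I believe this would be wired in like so:
# from langchain.prompts import PromptTemplate
# qa_prompt = PromptTemplate(
#     template=system_template,
#     input_variables=["context", "question"],
# )
# chain = ConversationalRetrievalChain.from_llm(
#     llm=ChatOpenAI(model="gpt-3.5-turbo"),
#     retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
#     combine_docs_chain_kwargs={"prompt": qa_prompt},
# )

# Plain-Python sanity check that the placeholders fill in:
filled = system_template.format(
    context="(retrieved documents)",
    question="What is a straw man?",
)
```

Is this the right approach, or is there a better way to give the chain a persistent system role?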

# Import necessary modules
import os
import sys

# Importing OpenAI library
import openai

# Import modules from langchain package
from langchain.document_loaders import DirectoryLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Set the OpenAI API key using the APIKEY
os.environ['OPENAI_API_KEY'] = "sk-xxxx"

# Initialize query variable to None
query = None

# Check if command-line arguments were provided
if len(sys.argv) > 1:
    query = sys.argv[1]

# Create the index from the data in the "Data Files/FallacyInfo" directory
loader = DirectoryLoader("Data Files/FallacyInfo")
index = VectorstoreIndexCreator().from_loaders([loader])

# Create a ConversationalRetrievalChain using the ChatOpenAI language model and the index as the retriever
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
)

# Initialize an empty list to store the chat history
chat_history = []

# Start an infinite loop to keep the chat session running
while True:

    # If query is not provided, get user input as the query (prompt)
    if not query:
        query = input("You: ")

    # Check if the user wants to quit the chat session
    quit_keywords = ['quit', 'q', 'exit']
    if query.lower() in quit_keywords:
        print("FallacyFinder: Goodbye! Have a great day!")
        break

    # Use the conversational retrieval chain (chain) to get a response for the current query
    result = chain({"question": query, "chat_history": chat_history})

    # Print the AI Assistant's answer from the response
    print("FallacyFinder:", result['answer'])

    # Append the user query and AI assistant's response to the chat history
    # (the chain expects plain (question, answer) tuples, without prefixes)
    chat_history.append((query, result['answer']))

    # Reset the query to None to allow the user to input a new query in the next iteration
    query = None

For reference, I followed the tutorial below:

video:

https://www.youtube.com/watch?v=9AXP7tCI9PI&list=LL&index=2&pp=gAQBiAQB

GitHub:

https://github.com/techleadhd/chatgpt-retrieval/tree/main
