I have been trying to create a GPT-3.5-turbo chatbot with a Gradio interface. The chatbot works perfectly fine on the command line, but not when I implement it with Gradio. I am able to send my input and receive the response; however, when the response is returned, Gradio doesn't display it properly. It shows the "role" and "content" dictionary keys instead of the chat strings. My goal is to have a simple chat conversation with the history recorded in the web interface.
I have tried returning plain strings and all sorts of different sections of the dictionary, and I'm completely at a loss. When I returned a string from the predict function, it complained that it wanted something to enumerate. I then tried sending a list of strings, with no luck there either. When I return the whole dictionary, it doesn't raise an error, but it then displays the keys rather than the values.
Here is an image of the error occurring in the Gradio interface.
Here is my current code:
import openai
import gradio as gr

openai.api_key = "XXXXXXX"

history = []
system_msg = input("What type of chatbot would you like to create? ")
history.append({"role": "system", "content": system_msg})

with open("chatbot.txt", "w") as f:
    f.write("System: " + system_msg)

print("Say hello to your new assistant!")

def predict(input, history):
    if len(history) > 10:
        history.pop(1)
        history.pop(2)
        history.pop(3)
    history.append({"role": "user", "content": input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history)
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return history, history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    state = gr.State([])
    with gr.Row():
        txt = gr.Textbox(show_label=False, placeholder="What kind of chatbot would you like to create? ").style(container=False)
    txt.submit(predict, [txt, state], [chatbot, state])

demo.launch()
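For what it's worth, my current understanding is that the gr.Chatbot component renders a list of (user_message, bot_message) pairs rather than a list of role/content dicts, so I suspect predict needs to convert before returning. This is only a sketch of the conversion I have in mind (history_to_pairs is a helper name I made up, not part of Gradio):

```python
def history_to_pairs(history):
    """Collapse role/content dicts into (user, assistant) pairs.

    Skips the system message; pairs each user turn with the
    assistant reply that follows it.
    """
    pairs = []
    pending_user = None
    for msg in history:
        if msg["role"] == "user":
            pending_user = msg["content"]
        elif msg["role"] == "assistant":
            pairs.append((pending_user, msg["content"]))
            pending_user = None
    return pairs

# Example with a history like the one my code builds up:
demo_history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
]
print(history_to_pairs(demo_history))  # [('Hi', 'Hello! How can I help?')]
```

If that's right, predict would return `history_to_pairs(history), history` so the Chatbot gets pairs while the State keeps the raw dicts for the API, but I haven't been able to confirm this is the expected format.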