
I'm new to Python and I'd like a hand with this code. I'm developing a smart chatbot that uses the OpenAI API and runs on WhatsApp. The piece of code below is responsible for the ChatGPT response. At the moment it uses model = "text-davinci-003" and I want to switch it to "gpt-3.5-turbo". Is any good soul willing to help me?

Note: "msg" is what we ask ChatGPT on WhatsApp

The piece of my code:

msg = todas_as_msg_texto[-1]
print(msg)  # -> message the client sends (in this case, me)

cliente = 'msg do cliente: '    # "client's message: "
texto2 = 'Responda a mensagem do cliente com base no próximo texto: '    # "Reply to the client's message based on the following text: "
questao = cliente + msg + texto2 + texto    # texto is defined elsewhere in the script

# #### PROCESS THE MESSAGE WITH THE CHATGPT API ####

openai.api_key = apiopenai.strip()

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=questao,
    temperature=0.1,
    max_tokens=270,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0.6,
)

resposta = response['choices'][0]['text']
print(resposta)
time.sleep(1)
    
Derek O

2 Answers


To update your code to gpt-3.5-turbo, there are four areas you need to modify:

  1. Call openai.ChatCompletion.create instead of openai.Completion.create
  2. Set model='gpt-3.5-turbo'
  3. Pass messages= as an array of role/content objects, as shown below
  4. Change the way you assign the response to your resposta variable so that you read from the message key of the first choice

This tested example takes into account those changes:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": questao}],
    temperature=0.1,
    max_tokens=270,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0.6,
)

resposta = response['choices'][0]['message']['content']

Additionally, since the model can return more than one choice, instead of only looking at [0] you may want to iterate over all of them to see what you're getting, something like:

for choice in response.choices:
    outputText = choice.message.content
    print(outputText)
    print("------")
print("\n")

Note that you don't need to do that if you call openai.ChatCompletion.create with n=1 (the default).

Additionally, your example sets both temperature and top_p; however, the docs suggest altering only one of them.
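For illustration, here is a minimal sketch that combines both notes: it requests two candidate replies via n and sets only temperature, leaving top_p at its default. It reuses the questao prompt variable built in the question, so treat it as a sketch rather than a drop-in replacement.

import openai

# questao is the prompt assembled in the question above
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": questao}],
    temperature=0.1,    # only temperature is set; top_p keeps its default
    max_tokens=270,
    n=2,                # ask for two candidate answers
)

for choice in response.choices:
    print(choice.message.content)
    print("------")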

busse
  • Hi Derek! Thanks for the help. There is just one more thing I want to know. Instead of only passing "questao" in the messages field, I want to first pass something like "You are going to be a medical secretary..." as a prompt and create a conversation loop based on previous questions and answers. Thanks. – Luigi Toniolo Apr 18 '23 at 15:25
  • Hi @LuigiToniolo -- Derek was the person who edited your original question; I posted the answer that you've commented on (please take a moment to [up vote / accept the answer](https://stackoverflow.com/help/someone-answers) if it addressed your question as posted). Regarding your follow-up about looping a conversation, that is probably best asked as a new question, but in general you want to look at how to use `"role": "assistant"` in the `messages` parameter array; a rough sketch follows these comments. – busse Apr 18 '23 at 18:35
  • I'm sorry, busse. I'm new here... – Luigi Toniolo Apr 19 '23 at 22:49
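
Building on the last suggestion in the comments above, here is a rough sketch of that conversation loop. It is only an illustration, not code from either answer: the system prompt text, the historico list, and the responder helper are made-up names, and it assumes openai.api_key has already been set as in the question.

import openai

# Hypothetical running history: a system prompt plus every user/assistant turn so far
historico = [
    {"role": "system", "content": "You are going to be a medical secretary..."}
]

def responder(msg):
    # Add the client's message, call the model, then store the reply under
    # the "assistant" role so the next call sees the whole conversation.
    historico.append({"role": "user", "content": msg})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=historico,
        temperature=0.1,
        max_tokens=270,
    )
    resposta = response['choices'][0]['message']['content']
    historico.append({"role": "assistant", "content": resposta})
    return resposta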

You can try calling the chat completions REST endpoint directly with requests:

import requests
import json

# Create an HTTP session and build the request
httpClient1 = requests.Session()
url = "https://api.openai.com/v1/chat/completions"
headers = {"Authorization": "Bearer " + OpenaiApiKey}
prompt = "Hello, how are you?"
request1 = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}]
}
# Send the request and wait for the response
response1 = httpClient1.post(url, headers=headers, json=request1)
responseContent1 = response1.content
# Deserialize the JSON response and extract the generated text
responseObject1 = json.loads(responseContent1.decode('utf-8'))
results = responseObject1["choices"][0]["message"]["content"]
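
Since this bypasses the openai package and calls the REST endpoint directly, error handling is up to you. One small, optional addition (using requests' built-in raise_for_status) is to check the HTTP status right after the post() call, for example:

response1 = httpClient1.post(url, headers=headers, json=request1)
response1.raise_for_status()    # raises requests.HTTPError on 4xx/5xx responses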
Raphael Mutiso