
I have tried implementing a chatbot with OpenAI in JavaScript, using the official OpenAI npm dependency.

My approach is to keep an array of chat messages, which gets joined by newlines and sent as the prompt to the API.

Example:

```
arr.push("This is a conversation between you and an AI")
arr.push("You: Hello, how are you doing")
arr.push("AI: I'm great, how about you?")
arr.push("You: I'm good, thanks!")
```

I then push the next question asked to the array, followed by an empty "AI:" string for the OpenAI endpoint to complete.

The resulting prompt for the API to complete looks like this:

```
This is a conversation between you and an AI
You: Hello, how are you doing
AI: I'm great, how about you?
You: I'm good, thanks!
You: How's the weather today?
AI:
```

The response is then also pushed to the array, so the conversation can continue. (At this time I only send the last ~20 lines from the array.) However, the problem I have is that the bot will start repeating itself, seemingly at random: it will start answering something like "great, how about you?", and whatever you send as the last question in the prompt will simply be echoed back as the answer.
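The loop described above can be sketched as follows. The helper names (`trimHistory`, `buildPrompt`, `ask`) are my own, and the `createCompletion` call follows the v3-style signature used later in the question; the `stop` parameter is an addition I'd suggest, since a stop sequence keeps the model from generating both sides of the conversation:

```javascript
// Seed line plus a rolling window of conversation lines.
const history = ["This is a conversation between you and an AI"];

// Keep the seed line plus only the last N exchange lines, so the
// prompt stays within the model's context window.
function trimHistory(arr, maxLines = 20) {
  return [arr[0], ...arr.slice(1).slice(-maxLines)];
}

// Build the prompt: prior lines, the new question, and a dangling
// "AI:" for the model to complete.
function buildPrompt(arr, question) {
  return [...trimHistory(arr), `You: ${question}`, "AI:"].join("\n");
}

// Hypothetical usage with the openai npm client from the question.
async function ask(openai, question) {
  const completion = await openai.createCompletion("text-davinci-001", {
    prompt: buildPrompt(history, question),
    max_tokens: 200,
    temperature: 0.6,
    stop: ["You:"], // assumption: stop before the model invents the user's turn
  });
  const answer = completion.data.choices[0].text.trim();
  history.push(`You: ${question}`, `AI: ${answer}`);
  return answer;
}
```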

Example:

```
This is a conversation between you and an AI
You: Hello, how are you doing
AI: I'm great, how about you?
You: I'm good, thanks!
You: How's the weather today?
AI: It is looking great!
You: That's nice, any plans for today?
AI: It is looking great!
You: What are you talking about?
AI: It is looking great!
```

The only relevant things I have found in the documentation are the `frequency_penalty` and the `presence_penalty`. However, changing those doesn't seem to do much.

These are the parameters used for the examples above:

    const completion = await openai.createCompletion("text-davinci-001", {
        prompt: p,
        max_tokens: 200,
        temperature: 0.6,
        frequency_penalty: 1.5,
        presence_penalty: 1.2,
    });

    return completion.data.choices[0].text.trim();

I have of course also tried different combinations of temperatures and penalties. Is this just a known problem, or am I misunderstanding something?

  • I was wondering if this is the only way for the system to maintain the context of a conversation. Is that what you have found (sending back previous prompts and answers along with a new one)? – JHolmes May 06 '22 at 04:24

1 Answer

Frequency and presence penalties have a maximum value of 1 - I'm not sure how the API handles values above that.

Try text-davinci-003, the newest version (as of 19/1/2023) - here's an official example of a chatbot prompt. The temperature is set to 0.9 for creativity, and the presence penalty is 0.6 to avoid topic repetition.
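In the question's v3-style client, those settings would look roughly like this (a sketch, not a definitive implementation; the stop sequences and `max_tokens` value are assumptions on my part):

```javascript
// Sketch: completion call with the chat-oriented settings mentioned above
// (text-davinci-003, temperature 0.9, presence_penalty 0.6).
async function chatCompletion(openai, prompt) {
  const completion = await openai.createCompletion("text-davinci-003", {
    prompt,
    temperature: 0.9,      // higher temperature for more creative replies
    max_tokens: 150,       // assumed cap on the reply length
    frequency_penalty: 0,
    presence_penalty: 0.6, // discourages returning to the same topic
    stop: [" You:", " AI:"], // assumption: cut off before the next turn marker
  });
  return completion.data.choices[0].text.trim();
}
```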

While not recommended, you could also try the base-series model davinci, which is a bit of a loose cannon.

thorin9000