
I thought the "user" parameter was supposed to do this job, but it doesn't work.

https://platform.openai.com/docs/api-reference/chat

1 Answer

You need to refeed your previous messages and responses to maintain context. (The "user" param is only there so OpenAI can monitor for abuse.) Remember, it is a completion AI: it can only take input and give output, so to maintain context you have to put the context back into the input.

Also, keep in mind that the new model, gpt-3.5-turbo, takes its input in a different format than the Davinci models.

Davinci input is like this:

//import and configure...

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "Say this is a test",
  temperature: 0,
  max_tokens: 7,
});

while the gpt-3.5-turbo model goes through the chat completions endpoint and looks like this:

//import and configure...

// Note: chat models use createChatCompletion, not createCompletion
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "user", content: "Say this is a test" },
  ],
  temperature: 0,
});

So it's a little different. If you want to refeed for context, you add it to the "messages" array - something like this...

//import and configure...

// Placeholders - wire these up to your own app
const message = userInput; // the user's current message
const context = previousMessages; // prior conversation as text, or "" if none

const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: context },
    { role: "user", content: message },
  ],
  temperature: 0,
});

The "system" role is for the context, so gpt knows to respond to the user input primarily and not the system input. That can also be a useful field for prefacing user prompts fyi.

Hope that helps.

  • Does OpenAI provide access to those previous user messages or is that something we have to store and populate? – zero_cool Apr 01 '23 at 23:04
  • Has to be populated – JohnSmith2000 Apr 04 '23 at 19:43
  • This will easily consume tokens and become expensive, right? – Bharathvaj Ganesan Apr 28 '23 at 01:13
  • yes, if you add context it will consume additional tokens – kaumnen May 03 '23 at 18:58
  • It might also save tokens, because you may need fewer questions to get the info you truly want. It's a balancing act: feeding too much context can confuse it. But say I want it to find a bunch of genes and then ask a bunch of questions about them - if the list of genes is in the context, the engine doesn't have to solve that same problem of figuring out the right genes over and over. If you only intend to ask one or two questions to get a final answer, you shouldn't need context; you can provide it in the query itself. – jdmneon Aug 02 '23 at 20:24
  • Should I send only the messages sent by the user, or also the responses from the GPT model? – IMANUL SIDDIQUE Aug 27 '23 at 10:44
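
On the token-cost point raised in the comments: refed context is billed on every call, so a common mitigation is to cap how much history you resend. A rough sketch, reusing the hypothetical "history" shape from the answer and a crude 4-characters-per-token estimate (use a real tokenizer such as tiktoken for accurate counts):

// Drop the oldest turns once the refed history exceeds a rough budget
const MAX_CHARS = 3000 * 4; // ~3,000 tokens at ~4 chars per token (estimate)

function trimHistory(history) {
  const system = history[0]; // always keep the system message
  let rest = history.slice(1);
  const size = (msgs) => msgs.reduce((n, m) => n + m.content.length, 0);
  while (rest.length > 1 && size(rest) > MAX_CHARS) {
    rest = rest.slice(1); // oldest turn goes first
  }
  return [system, ...rest];
}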