
The code below is working. I can curl questions to ChatGPT and it replies on a one-off basis. However, if I try to engage in a conversation that requires the state of previous submissions to be referenced, the chat cannot follow.

I would like to know what I need to do (and the code needed) to retain the context of the conversation.

const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: "sk-my-key",
});
const openai = new OpenAIApi(configuration);

// Set up the server
const app = express();
app.use(bodyParser.json());
app.use(cors());

// Set up the ChatGPT endpoint
app.post("/chat", async (req, res) => {
  // Get the prompt from the request
  const { prompt } = req.body;

  // Generate a response with ChatGPT
  const completion = await openai.createCompletion({
    model: "text-davinci-002",
    prompt: prompt,
  });
  res.send(completion.data.choices[0].text);
});

// Start the server
const port = 8080;
app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

curl command being run in a new terminal:

curl -X POST -H "Content-Type: application/json" -d '{"prompt":"Hello, how are you doing today?"}' http://localhost:8080/chat
  • Usually cookies are used to keep the session alive. If you add -v to your curl invocation, do you receive any cookies? – hanshenrik Feb 04 '23 at 10:19

2 Answers


You could pass some context along with your prompt: save something like the last 10 exchanges, send them with the new message, then pop the oldest and add the newest response.

For a more efficient method, every 10 exchanges you could also ask "summarize this conversation: …", then pass that summary along with every new message and repeat.

It could also be a good idea to pass a low temperature parameter with your request to keep the chat focused on the topic.
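A minimal sketch of that idea, reusing the Express setup and openai client from the question. The in-memory history array, the window of 10 exchanges, and the max_tokens/temperature values are illustrative choices, not required settings:

const history = []; // alternating "User: ..." / "AI: ..." lines
const MAX_EXCHANGES = 10;

app.post("/chat", async (req, res) => {
  const { prompt } = req.body;

  // Prepend the saved exchanges so the model sees the conversation so far
  const fullPrompt = `${history.join("\n")}\nUser: ${prompt}\nAI:`;

  const completion = await openai.createCompletion({
    model: "text-davinci-002",
    prompt: fullPrompt,
    max_tokens: 256,   // raise from the default of 16 so replies are not truncated
    temperature: 0.2,  // low temperature keeps the chat focused
  });

  const answer = completion.data.choices[0].text.trim();

  // Remember the newest exchange and drop the oldest once the window is full
  history.push(`User: ${prompt}`, `AI: ${answer}`);
  while (history.length > MAX_EXCHANGES * 2) history.shift();

  res.send(answer);
});

The periodic "summarize this conversation" trick would slot in where the oldest entries are dropped: instead of discarding them, ask the model for a summary and keep that summary at the front of history.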

Matt Eng
  • Another thing I noticed is that when I run curl commands in the terminal, ChatGPT's response only displays one line of text and the rest is cut off. I'm curious what to do about that, as I do not know what the problem is. – William Jan 01 '23 at 16:50
  • You could try specifying max_tokens in the request. The default is 16, so that might explain the answers getting cut off. – Matt Eng Jan 01 '23 at 18:14
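For example, the call from the question can pass max_tokens explicitly (256 here is only an example value, not a recommended setting):

const completion = await openai.createCompletion({
  model: "text-davinci-002",
  prompt: prompt,
  max_tokens: 256, // the default of 16 cuts longer replies short
});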

According to this, you could save your chat history in your prompt, tagging prior messages with roles like system, user, and assistant. Maintaining the prompt this way before each request can achieve the desired effect.
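A rough sketch of that approach, assuming a chat-capable model (gpt-3.5-turbo here) and a version of the openai package that exposes createChatCompletion; the system prompt and in-memory messages array are illustrative:

const messages = [
  { role: "system", content: "You are a helpful assistant." },
];

app.post("/chat", async (req, res) => {
  const { prompt } = req.body;

  // Append the user's message, then send the whole history with each request
  messages.push({ role: "user", content: prompt });

  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: messages,
  });

  const reply = completion.data.choices[0].message.content;

  // Keep the assistant's reply so the next request sees the full context
  messages.push({ role: "assistant", content: reply });

  res.send(reply);
});

Because the full messages array is resent on every request, older entries still need to be trimmed or summarized eventually, just as in the other answer.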

Vcore