
I have two issues relating to the response result from OpenAI completion.

The following request doesn't return the full text when I send content of about 500 words with the prompt "Fix grammar mistakes" (is it a tokens issue?)


The second issue is that when the text contains double quotes or single quotes, it breaks the JSON format. So I delete all quotes from the content (I'm not sure it's the best solution, but I'd prefer to do it in JavaScript, not PHP).

curl_setopt($ch, CURLOPT_POSTFIELDS, "{\n  \"model\": \"text-davinci-001\",\n  \"prompt\": \"" . $open_ai_prompt  . ":nn" . $content_text  . "\",\n  \"temperature\": 0,\n  \"top_p\": 1.0,\n  \"frequency_penalty\": 0.0,\n  \"presence_penalty\": 0.0\n}");

"message": "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON.)"

Peter Mortensen
Mostafa Ezzat
    `when the text sometimes have some double quotes OR single quotes it messes with the JSON format`...the solution to this kind of thing is: don't build your JSON by hand like that. Make a PHP object / array with the correct structure, and then use `json_encode()` to turn it into valid JSON, it will automatically handle any escaping etc which is needed, and you can also use the options to tweak certain things about the output - check the PHP documentation. – ADyson Feb 07 '23 at 12:13
  • @ADyson Thanks, I'll try encoding the whole payload. However, I did try encoding and then decoding the text before sending it to the API, and it fails with both JS and PHP. – Mostafa Ezzat Feb 07 '23 at 12:22
  • It's not clear precisely what you tried, from that description, but yes you need to JSON-encode the whole thing, from a PHP object, that will make it a lot more reliable in terms of creating valid JSON – ADyson Feb 07 '23 at 12:23

1 Answer


Regarding token limits

First of all, I think you don't understand how tokens work: 500 words is more than 500 tokens. Use the Tokenizer to calculate the number of tokens.

As stated in the official OpenAI article:

Depending on the model used, requests can use up to 4097 tokens shared between prompt and completion. If your prompt is 4000 tokens, your completion can be 97 tokens at most.

The limit is currently a technical limitation, but there are often creative ways to solve problems within the limit, e.g. condensing your prompt, breaking the text into smaller pieces, etc.
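A sketch of the "breaking the text into smaller pieces" idea (the sentence-splitting regex and the chunk size are assumptions — tune them for your content); each chunk can then be sent to the API as its own request:

```php
<?php
// Split text into sentences, then group every 4 sentences into one chunk.
function chunk_text(string $text, int $sentences_per_chunk = 4): array
{
    // Naive sentence split: break on ., ! or ? followed by whitespace.
    $sentences = preg_split('/(?<=[.!?])\s+/', trim($text), -1, PREG_SPLIT_NO_EMPTY);

    $chunks = [];
    foreach (array_chunk($sentences, $sentences_per_chunk) as $group) {
        $chunks[] = implode(' ', $group);
    }
    return $chunks;
}

$chunks = chunk_text('One. Two. Three. Four. Five. Six.', 4);
// $chunks[0] => 'One. Two. Three. Four.'
// $chunks[1] => 'Five. Six.'
```

Splitting on sentence boundaries (rather than raw character counts) keeps each chunk grammatically self-contained, which matters for a "fix grammar" prompt.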

Switch from text-davinci-001 to a newer GPT-3 model (e.g., text-davinci-003), because the token limits are higher.

GPT-3 models:

[Table of GPT-3 models and their maximum token limits]


Regarding double quotes in JSON

You can escape double quotes in JSON by using \ in front of double quotes like this:

"This is how you can escape \"double quotes\" in JSON."

But... this is more of a quick fix. For a proper solution, see @ADyson's comment above:

Don't build your JSON by hand like that. Make a PHP object / array with the correct structure, and then use json_encode() to turn it into valid JSON, it will automatically handle any escaping etc which is needed, and you can also use the options to tweak certain things about the output - check the PHP documentation.
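As a sketch of that advice (the sample input values are hypothetical; the endpoint and field names mirror the question), `json_encode()` escapes any quotes inside the content automatically:

```php
<?php
// Hypothetical inputs mirroring the question.
$open_ai_prompt = 'Fix grammar mistakes';
$content_text   = 'He said "hello" and it\'s fine.'; // quotes are handled for you

// Build the request as a PHP array, then let json_encode() produce valid JSON.
$payload = json_encode([
    'model'             => 'text-davinci-001',
    'prompt'            => $open_ai_prompt . ":\n\n" . $content_text,
    'temperature'       => 0,
    'top_p'             => 1.0,
    'frequency_penalty' => 0.0,
    'presence_penalty'  => 0.0,
]);

$ch = curl_init('https://api.openai.com/v1/completions');
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
```

There is no longer any need to strip quotes from the content before sending it.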


EDIT 1

You need to set the max_tokens parameter higher. Otherwise, the output will be shorter than your input. You will not get the whole fixed text back, but just a part of it.
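If you build the payload as a PHP array, `max_tokens` is just one more key in the request body (the value here is only an illustration — size it to your expected output):

```php
<?php
// max_tokens caps the length of the completion, not the prompt.
$payload = json_encode([
    'model'       => 'text-davinci-001',
    'prompt'      => "Fix grammar mistakes:\n\nSome text to correct.",
    'temperature' => 0,
    'max_tokens'  => 1000, // illustrative value; must fit the model's context limit
]);
```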


EDIT 2

Now you've set the max_tokens parameter too high! If you set max_tokens = 5000, that's too much even for the most capable GPT-3 model (i.e., text-davinci-003): the prompt and the completion together can be at most 4097 tokens.

You can figure this out if you take a look at the error you got:

"error": {"message": "This model's maximum context length is 4097 tokens, however you requested 6450 tokens (1450 in your prompt; 5000 for the completion). Please reduce your prompt; or completion length."}
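In other words, the budget is shared, so the largest valid max_tokens is whatever is left after the prompt (the numbers below come from the error message above):

```php
<?php
$context_limit  = 4097; // model's maximum context length (prompt + completion)
$prompt_tokens  = 1450; // prompt size reported in the error message
$max_completion = $context_limit - $prompt_tokens;

echo $max_completion; // 2647 — the largest valid max_tokens for this prompt
```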
Rok Benko
  • I did try the model with the highest token limit and it succeeded. The thing is, even then, the article is about 700 words, and when I use ```fix grammar mistakes```, for example, the result is no more than 70 words. Do you think there's a better endpoint to use? – Mostafa Ezzat Feb 07 '23 at 12:21
  • 1
    `You can escape double quotes in JSON by using \ in front of double quotes`...you can, but you shouldn't generate the JSON by hand to begin with - see my first comment on the main thread under the question. – ADyson Feb 07 '23 at 12:24
  • 1
    @MostafaEzzat I edited my answer. You need to set the `max_tokens` parameter higher. – Rok Benko Feb 07 '23 at 12:26
  • @ADyson You're right, will edit my answer. :) – Rok Benko Feb 07 '23 at 12:29
  • "I think you don't understand how tokens work: 500 words is more than 500 tokens. Use the Tokenizer to calculate the number of tokens." "You need to set the max_tokens parameter higher! Otherwise, the output will be shorter than your input." – I did add the max_tokens field with 5000, and I tried omitting it to see whether the result completes. With max_tokens: 5000 I get this: ```string(295) "{ "error": {"message": "This model's maximum context length is 4097 tokens, however you requested 6450 tokens (1450 in your prompt; 5000 for the completion). Please reduce your prompt; or completion length.``` – Mostafa Ezzat Feb 07 '23 at 12:30
  • I got it, sorry. However, the text is pretty short. – Mostafa Ezzat Feb 07 '23 at 12:36
  • It says 163 tokens on the [Tokenizer](https://platform.openai.com/tokenizer), but in the API result it's thousands. May I ask what's the best way to send the text in chunks? – Mostafa Ezzat Feb 07 '23 at 12:39
  • I edited my answer once again. Just divide it into, let's say, 5 parts. Let me know if this solves your problem. – Rok Benko Feb 07 '23 at 12:41
  • May I ask one last thing: what's the best way to send the text in chunks to the API? Is it splitting the text on every 3-4 new lines, for example? – Mostafa Ezzat Feb 07 '23 at 12:50
  • 1
    Hm, I'm not sure if splitting the text on each 3-4 lines will be the best option. Try to split the text on each 3-4 **sentences** instead. Make a test and see what works best. – Rok Benko Feb 07 '23 at 13:02