
I am calling the OpenAI API with the details below and I am getting this response:

 {'value': {'outputs': [{'finishReason': 'LENGTH', 'text': '\n\nThe summary in this JSON format is as follows:\n\nshort_', 'generationTimestamp': 1689303489, 'trackingId': "Ê.\x00Õ!07´a'"}], 'modelId': 'TEXT_ADA_001'}}

My parameters:

 {
"model_max_tokens": 1024,
"model_id": "TEXT_DAVINCI_003",
"model_temperature": 0
}

Prompt text length is 5903

The response is not a complete message and finishReason is LENGTH. How can I handle this and get the desired results?

Helen
GoneCase123

1 Answer


Based on the provided information, it seems you're using TEXT_DAVINCI_003 and your prompt token length is 5903.

I have reproduced a similar scenario with a similar prompt token length.

The model's max context length is 4097 tokens ("completion_tokens" + "prompt_tokens"), and your request length is 6927 (1024 + 5903), which is higher than the allowed length.
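The budget arithmetic above can be checked with a small snippet (the 4097-token limit for text-davinci-003 is from the model documentation; the prompt and max_tokens figures are the ones from the question):

```python
# Context-window budget check for text-davinci-003.
MODEL_CONTEXT_LIMIT = 4097     # completion_tokens + prompt_tokens

prompt_tokens = 5903           # prompt token count from the question
max_completion_tokens = 1024   # the "model_max_tokens" parameter

requested = prompt_tokens + max_completion_tokens
print(f"Requested: {requested} tokens, limit: {MODEL_CONTEXT_LIMIT}")

if requested > MODEL_CONTEXT_LIMIT:
    overshoot = requested - MODEL_CONTEXT_LIMIT
    print(f"Over budget by {overshoot} tokens; "
          f"shorten the prompt or lower max_tokens.")
```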

To fix this, please reduce the length of your prompt text.
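One minimal sketch for trimming the prompt, assuming a rough heuristic of about 4 characters per token (for accurate counts you would tokenize with a library such as tiktoken; the `truncate_prompt` helper here is hypothetical, not part of the OpenAI API):

```python
def truncate_prompt(prompt: str, max_prompt_tokens: int,
                    chars_per_token: int = 4) -> str:
    """Roughly truncate a prompt to fit a token budget.

    Assumes ~4 characters per token on average, which is only an
    estimate; use a real tokenizer for exact counts.
    """
    max_chars = max_prompt_tokens * chars_per_token
    if len(prompt) <= max_chars:
        return prompt
    return prompt[:max_chars]

# Leave room for the 1024-token completion within the 4097-token window.
prompt_budget = 4097 - 1024   # tokens available for the prompt
shortened = truncate_prompt("some very long prompt text ...", prompt_budget)
```

Crude truncation can cut a document mid-sentence, so in practice you would trim at a paragraph or sentence boundary, or summarize the text in chunks.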

If you want to work with longer prompts, you can use other models that support a larger context length. Please check this documentation for details on other models.

RishabhM