
I am facing an issue when using the latest OpenAI GPT-4-0613 model in streaming mode. In some cases, the finish_reason field in the final chunk of the response is null instead of the expected value 'stop'. I provide the necessary context along with my question in the prompt. A similar question was asked on the OpenAI forum: https://community.openai.com/t/completion-finish-reason-is-missing-when-stream-true/90526

I am running the following code (openai Python SDK, pre-1.0 ChatCompletion API):

import openai

response = openai.ChatCompletion.create(
    model='gpt-4-0613',
    messages=[
        {'role': 'user', 'content': "Context along with question"}
    ],
    temperature=0,
    stream=True
)

for chunk in response:
    print(chunk)

My understanding is that, in streaming mode, the finish_reason field of the last chunk should be 'stop'. However, in certain cases it is null, which is inconsistent with the expected behavior of the model.
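To illustrate what I mean, here is a minimal helper (the function name is mine, and the chunks are mocked as plain dicts shaped like the streaming payloads, so no API call is made) that scans a stream and reports the last finish_reason seen:

```python
def last_finish_reason(chunks):
    """Return the last non-null finish_reason seen in a chunk stream.

    Assumes chunks shaped like ChatCompletion streaming payloads:
    {'choices': [{'delta': {...}, 'finish_reason': ...}]}
    """
    reason = None
    for chunk in chunks:
        fr = chunk['choices'][0].get('finish_reason')
        if fr is not None:
            reason = fr
    return reason

# Mocked stream behaving as expected: last chunk carries 'stop'.
good = [
    {'choices': [{'delta': {'content': 'Hi'}, 'finish_reason': None}]},
    {'choices': [{'delta': {}, 'finish_reason': 'stop'}]},
]
# Mocked stream reproducing the issue: finish_reason is never set.
bad = [
    {'choices': [{'delta': {'content': 'Hi'}, 'finish_reason': None}]},
    {'choices': [{'delta': {}, 'finish_reason': None}]},
]
print(last_finish_reason(good))  # stop
print(last_finish_reason(bad))   # None
```

With gpt-4-0613 I sometimes get the second pattern, where no chunk ever carries a finish_reason.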

Note: gpt-4-0314, by contrast, always returns finish_reason 'stop' in the last chunk.

