
I want to ask if there's a way to use the OpenAI API to generate complete responses even after the max-token limit is hit. I'm using the official OpenAI Python package, but I can't find a way to do this with GPT-3 (text-davinci-003), since it doesn't support the chat interface.

My code for this currently looks like this:

```python
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=2049 - len(prompt),
)

text = response.choices[0].text.strip()
```
Khaled
Pranav Purwar
  • `max_tokens=2049-len(prompt)` doesn't make sense. Tokens are not characters. Counting the number of tokens in a string is a very specific process, but you can also estimate it; one token is about 4 characters. So it's quite likely that confusing string length with token count is the issue here: [What are tokens and how to count them?](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them) – Random Davis Apr 05 '23 at 17:16
  • hmm thanks, I'm using tiktoken now – Pranav Purwar Apr 06 '23 at 01:12
  • Did you resolve this? I have the same problem and the tokens counter seems not to count like Chat GPT does in a request: https://stackoverflow.com/questions/76661442/chat-gpt-tokens-count-does-not-match – Jota Jul 11 '23 at 11:36
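To illustrate the point raised in the comments, here is a minimal sketch of token budgeting for a completion call. The 4-characters-per-token ratio is only a rough heuristic (the exact count requires a tokenizer such as tiktoken); the function names and the sample prompt are illustrative, not part of any library:

```python
# Rough token budgeting for text-davinci-003 (a sketch, not an exact method).
# For an exact count, use tiktoken instead of the heuristic, e.g.:
#   len(tiktoken.encoding_for_model("text-davinci-003").encode(prompt))

MODEL_CONTEXT = 2049  # context window shared by prompt + completion


def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


def completion_budget(prompt: str, context: int = MODEL_CONTEXT) -> int:
    """Tokens left for the completion after the (estimated) prompt tokens."""
    return max(0, context - estimate_tokens(prompt))


prompt = "Write a haiku about autumn leaves."
print(completion_budget(prompt))  # far more than 2049 - len(prompt) would allow
```

Passing this budget as `max_tokens` avoids both undershooting (wasted budget from counting characters as tokens) and the API error you get when prompt tokens plus `max_tokens` exceed the context window.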

1 Answer


To continue a truncated response in the ChatGPT interface, use the "Continue" command, or paste the last few lines back and ask it to continue from there. For scripts, code, or other long content, break the prompt into smaller parts and submit them separately.
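With the Completions API (no chat interface), the same idea can be sketched as a loop: append whatever the model returned to the prompt and re-request until `finish_reason` is no longer `"length"`. The `complete` callable below is a hypothetical injection point so the loop itself is self-contained; in real use it would wrap `openai.Completion.create`:

```python
# Sketch of continuing a truncated completion: keep feeding the text generated
# so far back in as part of the prompt until the model stops on its own.

def complete_until_done(prompt, complete, max_rounds=5):
    """complete(prompt) -> (text, finish_reason); loop while truncated.

    finish_reason == "length" means the completion hit max_tokens and was
    cut off; anything else (typically "stop") means the model finished.
    """
    full_text = ""
    for _ in range(max_rounds):
        text, finish_reason = complete(prompt + full_text)
        full_text += text
        if finish_reason != "length":
            break
    return full_text


# With the legacy openai<1.0 package, `complete` might look like (untested):
# def complete(p):
#     r = openai.Completion.create(model="text-davinci-003", prompt=p,
#                                  max_tokens=256)
#     return r.choices[0].text, r.choices[0].finish_reason
```

Note that each round re-sends the whole accumulated text, so the prompt grows toward the 2049-token context window; once it fills up, the oldest part has to be dropped or summarized.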