I'm playing with OpenAI's GPT-3 API, but I'm struggling to get it to generate text that is long enough.
Here is my code:
import os
import openai

# export OPENAI_API_KEY='get_key_from_openai'
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="How to choose a student loan",
    temperature=0.6,
    max_tokens=512,        # upper bound on the number of generated tokens
    top_p=1,
    frequency_penalty=1,
    presence_penalty=1,
    n=10,                  # number of completions to generate
)

print(response['choices'][0]['text'])
An example of the output I get is:
"There are a few things to consider when choosing a student loan, including the interest rate, repayment options, and whether the loan is federal or private. You should also compare loans to see which one will cost you the least amount of money in the long run"
However, that is only about 50 words, which can't be much more than 80-100 tokens, nowhere near the 512 I asked for. I also thought that the n parameter was supposed to produce n consecutive generated texts?
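For what it's worth, here is how I would expect to look at the other generations if n really does return several completions; I'm assuming they come back as separate entries in response['choices']:

# Print every returned completion, assuming the n completions land in
# response['choices'] as separate entries.
for i, choice in enumerate(response['choices']):
    print(f"--- choice {i} ---")
    print(choice['text'])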
Can someone explain how to make the generated text longer (ideally ~1000 tokens)? Some Hugging Face models have a min_tokens parameter, but I couldn't find an equivalent here.
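The only workaround I've come up with so far is chaining calls and feeding the previous output back in as part of the prompt, roughly like the untested sketch below (it reuses the openai setup from above; the 4-characters-per-token figure is only a rough guess):

prompt = "How to choose a student loan"
generated = ""

# Keep asking the model to continue from its own output until roughly
# 1000 tokens' worth of text has accumulated (~4 characters per token
# is just a rule of thumb, not an exact count).
while len(generated) < 1000 * 4:
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt + generated,
        temperature=0.6,
        max_tokens=512,
    )
    chunk = response['choices'][0]['text']
    if not chunk.strip():
        break  # the model has nothing more to add
    generated += chunk

print(generated)

Is that really the intended way to do it, or is there a parameter I'm missing?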
Thanks a lot