
This "best_of" warning results from using the OpenAI API on a PC running Windows 10.

The Context:

Using the OpenAI API in JupyterLab with the IR kernel, with only the rgpt3 library installed in the notebook.

The API successfully performs a test code completion. It does not matter whether a single or multiple API requests are made; both return the same warning.

The following results when using 3 queries:

[1] "Request: 1/3" To avoid an invalid_request_error, best_of was set to equal n

[1] "Request: 2/3" To avoid an invalid_request_error, best_of was set to equal n

[1] "Request: 3/3" To avoid an invalid_request_error, best_of was set to equal n

After multiple unsuccessful web searches, including a search on Stack Overflow, I found that almost no information about this warning exists anywhere. It is probably too early in the process, because the OpenAI API is still relatively new to most people.

Therefore, I decided to post both the question and the answer regarding this warning, because otherwise finding this information is difficult and time consuming. And for users who are boldly going where few have gone before, unexplained errors and warning messages do not inspire confidence.


1 Answer

What the following warning message is all about:

To avoid an invalid_request_error, best_of was set to equal n

The Best Practices guide on the OpenAI website describes what "best_of" means. This information is currently available at the following URL:

https://beta.openai.com/docs/guides/production-best-practices/improving-latencies

In a nutshell, "best_of" is one of the parameters used to define what we want back from the OpenAI API. API usage is metered in "tokens", which also drive a user's rate limits on the OpenAI side. In addition, most OpenAI models have a context-length limitation, with most models having a maximum context size of 2048 tokens.

The Best Practices guide at the OpenAI website suggests the following:

Generate fewer completions: lower the values of n and best_of when possible, where n is how many completions to generate for each prompt and best_of selects the result with the highest log probability per token.

If n and best_of both equal 1 (which is the default), the number of generated tokens will be at most equal to max_tokens.

If n (the number of completions returned) or best_of (the number of completions generated for consideration) is set to > 1, each request creates multiple outputs. In that case, the number of generated tokens is at most max_tokens * max(n, best_of).
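The token bound above can be sketched in a few lines of R (a minimal illustration of the arithmetic only, not part of the rgpt3 wrapper):

```r
# Upper bound on generated tokens per request: max_tokens * max(n, best_of)
max_generated_tokens <- function(max_tokens, n = 1, best_of = 1) {
  max_tokens * max(n, best_of)
}

max_generated_tokens(256)                       # defaults: 256 * max(1, 1) = 256
max_generated_tokens(256, n = 3)                # 256 * max(3, 1) = 768
max_generated_tokens(256, n = 2, best_of = 5)   # 256 * max(2, 5) = 1280
```

This makes it easy to see why lowering n and best_of reduces token consumption per request.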

The function used to send requests to the OpenAI API from the Jupyter notebook is an R wrapper that passes a range of parameters, including best_of. In the wrapper, best_of already defaults to 1 and is only changed manually. Copied and pasted from the function:

best_of = 1 

Therefore, it can only be presumed that the OpenAI API auto-generates the "best_of" warning for each prompt in every API request as a friendly reminder. This warning message can be programmatically suppressed if so desired.
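If the message surfaces as an R warning, base R can silence it around the request call. A minimal sketch, using a stand-in function rather than an actual rgpt3 call (noisy_request is hypothetical, invented here to stand in for whichever rgpt3 request function you use):

```r
# Stand-in for an API call that emits the "best_of" warning alongside its result.
noisy_request <- function() {
  warning("To avoid an invalid_request_error, best_of was set to equal n")
  "completion text"
}

# suppressWarnings() runs the call and discards any warnings it raises;
# the return value comes through untouched.
result <- suppressWarnings(noisy_request())
```

If the wrapper emits the text via message() instead, wrap the call in suppressMessages() the same way.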
