
I got into the API beta and I'm playing around with an app. I got as far as getting the API connection working and doing what I want in pycharm, but have a couple problems:

  1. I'm getting pretty slow response times and hitting a usage cap frequently as well (the API account is sufficiently funded). I assume some of this will improve as the new product stabilizes? Would rather not switch to an earlier model for my use case.

  2. I'm asking GPT to give me a list of items in a python list format, which I am able to typecast into an actual list. If I set the temperature too low I get back repetitive items, but if I set it too high I don't get the correct python formatting.

  3. I'm hitting the API 5 or 6 times, which could probably be consolidated down to a couple of calls, but that would depend on consistently getting a properly formatted JSON response, which seems even more dubious than asking for a python list.
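For what it's worth, one common approach to point 2 is to parse the model's reply with `ast.literal_eval` rather than casting or `eval`-ing it, since that only accepts literal syntax and fails cleanly on malformed output. A minimal sketch (the function name is mine, not from any library):

```python
import ast

def parse_list_response(text):
    """Safely parse a model reply expected to be a Python list literal.

    ast.literal_eval only accepts literals (lists, strings, numbers, ...),
    so arbitrary code in the reply can't execute. Returns None when the
    text is not a valid list literal.
    """
    try:
        value = ast.literal_eval(text.strip())
    except (ValueError, SyntaxError):
        return None
    return value if isinstance(value, list) else None
```

Returning `None` on failure makes it easy to detect a badly formatted reply and re-issue the request instead of crashing mid-pipeline.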

Basically, is this thing predictable enough that you can ask for data in a certain format and get it back in that format reliably enough to build an app on top of?

Any suggestions/discussion is appreciated.

What I have tried: various temperatures. I have asked OpenAI to increase the usage cap. I have not tried other models.

1 Answer


Try setting the "system" prompt to contain instructions about how you want the messages to be formatted. It is usually very consistent if you do that, but on the client you should still verify that the response is formatted correctly, retry the request if it isn't, and error out after too many attempts.

Modifying the temperature and top_p values hasn't really produced better results for any use cases I've come across. I did notice that a higher temperature seems to make requests take longer, often timing out.

Fragsworth