I got into the API beta and I'm playing around with an app. I got as far as getting the API connection working and doing what I want in PyCharm, but I have a couple of problems:
I'm getting pretty slow response times and frequently hitting a usage cap as well (the API account is sufficiently funded). I assume some of this will improve as the new product stabilizes? I'd rather not switch to an earlier model for my use case.
I'm asking GPT to return a list of items formatted as a Python list, which I then parse into an actual list. If I set the temperature too low I get repetitive items, but if I set it too high the Python formatting breaks.
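A simplified sketch of what I mean (not my exact code; the model name and prompt are placeholders, and I'm assuming the OpenAI Python SDK):

```python
import ast
import openai  # assuming the OpenAI Python SDK is installed and configured

def get_items(prompt: str, temperature: float = 0.7) -> list:
    """Ask the model for a Python-list-formatted answer and parse it."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    text = response["choices"][0]["message"]["content"].strip()
    # Safely evaluate a string like "['a', 'b', 'c']" into a real list.
    # Raises ValueError/SyntaxError when the formatting is off, which is
    # exactly what happens at higher temperatures.
    return ast.literal_eval(text)
```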
I'm hitting the API 5 or 6 times which could probably be consolidated down to a couple, but that would depend on consistently getting a properly formatted JSON response, which seems more dubious than asking for a python list.
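What I had in mind for consolidation is one call that returns a single JSON object, roughly like this sketch (the prompt text and keys are just illustrative):

```python
import json

PROMPT = (
    "Return ONLY a JSON object with keys 'items', 'summary', and 'tags'. "
    "Do not include any text before or after the JSON."
)

def parse_json_reply(text: str):
    """Try to parse the model's reply as JSON; return None on failure."""
    cleaned = text.strip()
    # Strip a ```json ... ``` fence if the model wrapped its answer in one.
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        # Caller could retry with a stricter prompt or lower temperature.
        return None
```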
Basically, is this thing predictable enough that you can ask for a specific data format and get it back in that format reliably enough to build an app on top of?
Any suggestions/discussion is appreciated.
What I've tried: various temperatures. I've asked OpenAI to increase the usage cap. I haven't tried other models.