I'm using the gpt-4-0613 model with a single function and some custom data in the system prompt.
If the function is triggered very early in the chat, within the first two requests, everything works fine: the model asks the user for the information it needs before calling the function.
However, if the function is triggered later in the conversation, say around the fifth question, the model simply makes up the argument values and sends back the function call.
How can I stop the model from making up answers? There is no way it could derive these values from the conversation context; they are entirely fabricated.
Here is the call I'm making:
import openai

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=prompts,  # the running list of chat messages for this conversation
    functions=[
        {
            "name": "fill_form",
            "description": "Helps the user create an XYZ Report",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "the full name of the person issuing this report",
                    },
                    "zip": {
                        "type": "string",
                        "description": "the 5 digit zip code of the address",
                    },
                    "address": {
                        "type": "string",
                        "description": "the street address, only the street and not the city, state or zip",
                    },
                    "year_end": {
                        "type": "string",
                        "description": "the full four digit year of the fiscal year",
                    },
                },
                "required": ["name", "address", "year_end", "zip"],
            },
        }
    ],
)
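When the problem occurs, the response already contains a function_call whose arguments are values the user never supplied. For reference, a minimal sketch of how I read them back, assuming the standard response shape of the pre-1.0 openai Python SDK:

import json

message = completion.choices[0].message
if message.get("function_call"):
    # the model returns the function arguments as a JSON string
    args = json.loads(message["function_call"]["arguments"])
    print(args)  # a fully populated dict with values the user never provided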
I've tried with and without the function_call='auto' option, with no effect.
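For clarity, this is where I've been toggling that option (a sketch, assuming the function list above is bound to a functions variable; 'auto' is the documented default):

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=prompts,
    functions=functions,   # same function definition as above
    function_call="auto",  # also tried removing this line entirely
)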
The model should always ask the user for these values and never make them up. Thank you for any help.