Questions tagged [openai-api]

OpenAI makes several AI products, including ChatGPT, DALL-E, and Whisper. Use this tag for questions about the OpenAI API, not for general support.

Use this tag for questions about using the OpenAI API. All questions should be within the scope of Stack Overflow and follow the How to Ask a good question and Expected Behavior guidelines.

Before asking a question, read the documentation.

If you include non-original code, be sure to provide proper attribution.

Don't use this tag for questions about using ChatGPT as an end user, or for questions about responses given by ChatGPT that aren't directly related to a programming issue involving the OpenAI API.

OpenAI Community: https://community.openai.com/

1,620 questions

5 votes · 1 answer

OpenAI API 404 response

I'm trying to use ChatGPT for my Telegram bot. I used to use the "text-davinci-003" model, and it was working fine (even now it's working fine), but I'm not satisfied with its responses. Now I'm trying to change the model to "gpt-3.5-turbo", and it's…
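
A 404 when switching from text-davinci-003 to gpt-3.5-turbo usually means the request is still going to a completions-style endpoint or SDK method: gpt-3.5-turbo is a chat model and has to be called through the chat completions route. A minimal sketch, assuming the openai npm package v4+ (the Telegram bot wiring is omitted):

```typescript
import OpenAI from "openai";

// Assumes the openai npm package v4+ and an API key in the environment.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function reply(userText: string): Promise<string> {
  // gpt-3.5-turbo is served by the chat completions endpoint; sending it to
  // the legacy completions endpoint is a common source of errors.
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: userText }],
  });
  return completion.choices[0].message.content ?? "";
}
```

Note that the messages array replaces the single prompt string used with text-davinci-003.
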
5 votes · 1 answer

How to maintain context with OpenAI gpt-3.5-turbo API?

I thought the user parameter was doing this job, but it doesn't work. https://platform.openai.com/docs/api-reference/chat
Allen_Tsang · 555
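
The user parameter is only an end-user identifier for abuse monitoring; the Chat Completions API is stateless, so context is kept by resending the earlier turns in messages on every request. A minimal sketch, assuming the openai npm package v4+ (ChatMessage is a local helper type, not part of the SDK):

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Local helper type (not from the SDK) for the running conversation.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// The API is stateless: keep the history yourself and resend it every turn.
const history: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
];

async function ask(question: string): Promise<string> {
  history.push({ role: "user", content: question });
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: history, // earlier turns are what give the model its context
  });
  const answer = completion.choices[0].message.content ?? "";
  history.push({ role: "assistant", content: answer });
  return answer;
}
```

In practice the history has to be trimmed or summarized once it approaches the model's token limit.
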
5 votes · 2 answers

OpenAI ChatGPT (GPT-3.5) API error: "openai.createChatCompletion is not a function"

I have this in my MERN stack code file, and it works well. exports.chatbot = async (req, res) => { console.log("OpenAI Chatbot Post"); const { textInput } = req.body; try { const response = await openai.createCompletion({ model:…
WynMars · 53
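
createChatCompletion only exists on the OpenAIApi class of the openai npm package from 3.1.0 onward, so this error usually means an older 3.0.x install (upgrading, e.g. npm install openai@^3.3.0, fixes it); on the v4 SDK the equivalent call is openai.chat.completions.create. A sketch of the v3-style usage the question is aiming for:

```typescript
// Sketch assuming the openai npm package v3.x (Configuration/OpenAIApi style).
import { Configuration, OpenAIApi } from "openai";

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

export const chatbot = async (req: any, res: any) => {
  const { textInput } = req.body;
  try {
    // createChatCompletion (chat endpoint) instead of createCompletion.
    const response = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: textInput }],
    });
    res.json({ reply: response.data.choices[0].message?.content });
  } catch (err) {
    res.status(500).json({ error: "OpenAI request failed" });
  }
};
```
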
5 votes · 2 answers

Expo react-native Error: URL.search is not implemented when working with openai.createImage

This is my function: async function generateImage() { try { const response = await openai.createImage({ prompt: 'a white siamese cat', n: 1, size: '1024x1024', }); console.log(response); } catch…
Ahmed0h · 193
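
React Native ships an incomplete URL implementation, which the openai v3 SDK trips over. Common workarounds are adding react-native-url-polyfill, or bypassing the SDK and calling the images endpoint with fetch. A sketch of the fetch route (the API key constant is a placeholder; in a real app the request should go through your own backend so the key never ships in the bundle):

```typescript
// NOTE: the key here is a placeholder; in production proxy this call through
// your own backend so the key never ships inside the app bundle.
const OPENAI_API_KEY = "sk-...";

async function generateImage(): Promise<string | undefined> {
  const res = await fetch("https://api.openai.com/v1/images/generations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      prompt: "a white siamese cat",
      n: 1,
      size: "1024x1024",
    }),
  });
  const json = await res.json();
  return json.data?.[0]?.url; // URL of the generated image
}
```
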
4 votes · 2 answers

How do I change the default 4 documents that LangChain returns?

I have the following code that implements LangChain + ChatGPT to answer questions from given data: import { PineconeStore } from 'langchain/vectorstores/pinecone'; import { ConversationalRetrievalQAChain } from 'langchain/chains'; const…
imjesr · 89
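
The 4 comes from the retriever's default k; it can usually be raised when the retriever is created from the vector store. A hedged sketch against the LangChain JS API as the question uses it (mid-2023 import paths; the Pinecone index setup is assumed from the question):

```typescript
import { PineconeStore } from "langchain/vectorstores/pinecone";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";

// pineconeIndex is assumed to be configured as in the question.
async function makeChain(pineconeIndex: any) {
  const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings(),
    { pineconeIndex }
  );
  const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });

  // asRetriever takes the number of documents to return; the default is 4.
  return ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever(8)
  );
}
```
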
4 votes · 1 answer

Determine whether OpenAI chat completion will execute function call or generate message

I currently have a chat feature in a NestJS application that uses the openai createChatCompletion API to generate a message based on user input and stream the response back to the client. With the addition of function calls to the openai API…
DeonV · 192
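
There is no up-front flag; the model's choice shows up in the response, as finish_reason === "function_call" plus a message.function_call payload for non-streamed calls (when streaming, the first deltas carry a function_call field instead of content). A sketch assuming the openai npm v4 SDK; get_weather is a hypothetical function for illustration:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function handleChat(userInput: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-0613", // a model version that supports function calling
    messages: [{ role: "user", content: userInput }],
    functions: [
      {
        name: "get_weather", // hypothetical function, for illustration only
        description: "Get the weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    ],
  });

  const choice = completion.choices[0];
  if (choice.finish_reason === "function_call" && choice.message.function_call) {
    // The model decided to call a function: name plus JSON-encoded arguments.
    const { name, arguments: args } = choice.message.function_call;
    return { type: "function_call", name, args: JSON.parse(args) };
  }
  // Otherwise it generated a normal assistant message.
  return { type: "message", content: choice.message.content };
}
```
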
4 votes · 2 answers

OpenAI embedding the same text returns different vectors

I am trying the OpenAI Embedding API now, but I found one issue. When I embed the same text again and again, I get different vector arrays. The text content is "baby is crying", and the model is text-embedding-ada-002 (MODEL GENERATION: V2). I run…
Yumumu · 41
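
This is expected: the embedding endpoint is not bit-for-bit deterministic, so repeated calls can return slightly different numbers while pointing in essentially the same direction. Comparing two runs with cosine similarity makes that visible. A sketch assuming the openai npm v4 SDK:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function compareEmbeddings() {
  const embed = async (text: string) => {
    const res = await openai.embeddings.create({
      model: "text-embedding-ada-002",
      input: text,
    });
    return res.data[0].embedding;
  };

  const v1 = await embed("baby is crying");
  const v2 = await embed("baby is crying");
  // Individual components may differ slightly, but the two vectors should be
  // almost perfectly aligned (cosine similarity close to 1.0).
  console.log(cosineSimilarity(v1, v2));
}
```
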
4 votes · 1 answer

How to return streams from Node.js with OpenAI

I am trying to set up a Node/React project that streams results from OpenAI. I found an example project that does this, but it is using Next.js. I am successfully making the call and the results are returning as they should; however, the issue is how…
texas697 · 5,609
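
One approach is to request the completion with stream: true and forward the tokens to the browser as server-sent events, which the React side can read incrementally with fetch or EventSource. A sketch of the Express side, assuming the openai npm v4 SDK (the route name and payload shape are placeholders):

```typescript
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post("/api/chat", async (req, res) => {
  // Server-sent events headers so the client can read chunks as they arrive.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: req.body.prompt }],
    stream: true,
  });

  // Forward each token to the client as it is generated.
  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content;
    if (token) res.write(`data: ${JSON.stringify(token)}\n\n`);
  }
  res.write("data: [DONE]\n\n");
  res.end();
});

app.listen(3001);
```
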
4 votes · 4 answers

How to resolve 'Import openai could not be resolved' error (pylance) with pip install command?

I feel like I'm asking a dumb question, but I've looked at multiple StackOverflow threads and articles online already but still haven't fixed my problem. I'm trying to use the OpenAI Python library to train a new model, but even after running…
4 votes · 0 answers

How to stream data in Vapor Swift?

I'm building an OpenAI chat stream backend using Vapor Swift. It connects to the OpenAI API using MacPaw's OpenAI wrapper. But I'm unsure how to stream the result to the client using SSE rather than as a single response. My current code looks like…
Wendell · 474
4 votes · 1 answer

ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents']. (langchain/Streamlit)

I got an error that says ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents']. Here's the traceback error File…
naranara · 151
4 votes · 1 answer

AzureOpenAI and LangChain: weird, multiple answers

I'm using AzureOpenAI to test LangChain's Self Critique using Constitution. It all works, except I get more than one answer, and the weirdest part is that it generates random, unwanted questions and answers them. Here is my Python code (I replaced…
4 votes · 2 answers

How to add prompt to Langchain ConversationalRetrievalChain chat over docs with history?

LangChain has added the ConversationalRetrievalChain function, which is used to chat over docs with history. According to their documentation (ConversationalRetrievalChain), I need to pass prompts, which are instructions, to the function. How can I…
4 votes · 2 answers

OpenAI Chat Completions API: How do I customize answers from GPT-3.5 or GPT-4 models if I can't fine-tune them?

We have seen some companies use GPT-3.5 or GPT-4 models to train on their own data and provide customized answers. But the GPT-3.5 and GPT-4 models are not available for fine-tuning. I've seen the documentation from OpenAI about this issue, but I had seen…
Lucien · 43
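
The usual alternative to fine-tuning is retrieval-augmented generation: embed your own documents, retrieve the chunks most similar to the question, and pass them to the chat model as context in the prompt. A compressed sketch assuming the openai npm v4 SDK, with a tiny in-memory document list standing in for a real vector database:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical document chunks; in practice these come from your own data
// and live in a vector database rather than an array.
const docs = ["Our refund policy lasts 30 days.", "Support is available 24/7."];

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function embed(texts: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: texts,
  });
  return res.data.map((d) => d.embedding);
}

async function answer(question: string): Promise<string> {
  const docVectors = await embed(docs);
  const [qVector] = await embed([question]);

  // Pick the document most similar to the question (top-1 for brevity).
  const scored = docVectors.map((v, i) => ({ score: cosine(v, qVector), i }));
  scored.sort((a, b) => b.score - a.score);
  const context = docs[scored[0].i];

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```
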
4 votes · 0 answers

LangChain losing context and timing out when using memory with agent

I've got a function that initializes an agent and makes a call to get the generated reply. It uses a tool that creates structured output from the user's input. Using this without any memory context works fine - a response is returned in the expected…
DeonV · 192
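
One thing worth checking in a setup like this is that the agent type is conversational and that the memory's memoryKey matches what that agent's prompt expects; if the history never reaches the prompt, each call starts cold and the agent can loop until it times out. A hedged sketch of the LangChain JS wiring (mid-2023 API; the tool is a placeholder for the structured-output tool in the question):

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { BufferMemory } from "langchain/memory";
import { DynamicTool } from "langchain/tools";

async function main() {
  const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

  // Placeholder tool standing in for the structured-output tool in the question.
  const tools = [
    new DynamicTool({
      name: "structure_input",
      description: "Turns the user's input into structured output",
      func: async (input: string) => JSON.stringify({ raw: input }),
    }),
  ];

  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
    // The conversational agent looks for history under "chat_history" and
    // needs it returned as message objects, not a flattened string.
    memory: new BufferMemory({ memoryKey: "chat_history", returnMessages: true }),
  });

  // Reusing the same executor (and memory) across turns keeps the context.
  console.log((await executor.call({ input: "Hi, my name is Sam." })).output);
  console.log((await executor.call({ input: "What is my name?" })).output);
}

main();
```
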