Questions tagged [gpt-4]

Use this tag for questions about Generative Pre-trained Transformer 4 (GPT-4). Do not use it for GPT-2/GPT-3 or for the ad-tagging library (GPT).

55 questions
0
votes
0 answers

Unable to read data as Llama index Documents

I'm currently working with LlamaIndex, trying to parse a column of my pandas DataFrame into Document objects, with the final goal of feeding my data into an LLM (I'm using gpt-4-32k). Does anyone know how to do this without explicitly…
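For reference, turning a DataFrame column into Document objects usually amounts to a list comprehension. A minimal sketch, not the asker's code; the column name and the import path are assumptions (the import moved in recent LlamaIndex releases):

```python
# Hedged sketch: one llama_index Document per row of an assumed "body" column.
import pandas as pd
from llama_index import Document  # newer releases: from llama_index.core import Document

df = pd.DataFrame({"body": ["first row of text", "second row of text"]})

# build one Document per row of the chosen column
documents = [Document(text=str(value)) for value in df["body"]]
```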
0
votes
0 answers

Error when Running Vicuna's FastChat Model without GPU

I want to use the open-source Vicuna model to train on my dataset. I don't have a GPU in my computer, so I wanted to use their RESTful API server. I used Windows PowerShell for the commands below. According to their explanation…
DBM
  • 71
  • 7
0
votes
0 answers

GPT-4 occasionally doesn't follow the output format instruction

I am writing a custom wrapper for the OpenAI GPT-4 API. I do the prompting similarly to the ReAct model (Thought, Action, Observation, Final Answer). This is my output format instruction for the agent scratchpad: Populate the scratchpad (delimited by the…
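One common workaround, sketched below with the pre-1.0 openai Python client, is to validate each reply against the expected scratchpad format and re-ask once when it drifts; the regex check and the reminder message are illustrative assumptions, not the asker's wrapper:

```python
# Hedged sketch: re-ask GPT-4 once when the reply breaks the expected format.
import re
import openai

FORMAT_RULE = "Answer using exactly: Thought: ... Action: ... Observation: ... Final Answer: ..."

def ask_with_format_check(messages, max_retries=1):
    for _ in range(max_retries + 1):
        response = openai.ChatCompletion.create(
            model="gpt-4", temperature=0, messages=messages
        )
        reply = response["choices"][0]["message"]["content"]
        if re.search(r"Final Answer:", reply):  # crude format check
            return reply
        # remind the model of the format and try again
        messages = messages + [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": "Your last reply broke the format. " + FORMAT_RULE},
        ]
    return reply
```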
0
votes
0 answers

I am not getting the same quality of response from my API as I get from ChatGPT

Can someone explain why I am not getting a good enough response? My GPT-3.5 API is not generating content that is as good as ChatGPT's response. My app is about helping recruiters refine their job posts, but it's not working well. How can I improve the…
0
votes
1 answer

My GPT-3.5 Turbo API is not giving as good a response as I can get from ChatGPT

So I have implemented the GPT-3.5 Turbo API in my React app. My app is basically like an assistant to a recruiter: a recruiter gives a sample job post to the app, and it sends this post to ChatGPT to craft it. Now I have different personas to…
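For context, personas are usually applied by sending a system message with every request. A minimal sketch in Python for brevity (the app in the question is React); the persona text, function name, and model choice are assumptions:

```python
# Hedged sketch: apply a persona via the system role on each chat completion call.
import openai

PERSONAS = {
    "recruiter_coach": "You are an experienced technical recruiter. "
                       "Rewrite job posts to be clear, inclusive, and concise.",
}

def refine_job_post(post_text, persona="recruiter_coach"):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.3,
        messages=[
            {"role": "system", "content": PERSONAS[persona]},  # the persona
            {"role": "user", "content": post_text},            # the recruiter's draft
        ],
    )
    return response["choices"][0]["message"]["content"]
```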
0
votes
1 answer

Why does this bundled app not work when the Python script does?

I have essentially zero coding experience. I'm using GPT-4 prompts to help me put together a simple print utility that generates a barcode and adds 1 to the last code printed so that we can organize our inventory. The script works great when run…
Shaken89
  • 1
  • 1
0
votes
1 answer

How to change the QA_PROMPT for my own use case?

I was following this description. I can't understand what QA_PROMPT means, or how I can change it for my own use case. I checked my Pinecone index but I can't find anything about QA_PROMPT. What should I do? Please help me.
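For what it's worth, in LangChain's Python API the QA prompt is just a PromptTemplate handed to the chain (it is not stored in Pinecone). A hedged sketch assuming a RetrievalQA chain over an existing Pinecone-backed vector store; the template wording and function name are illustrative:

```python
# Hedged sketch: supply a custom QA prompt to a RetrievalQA chain.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

QA_PROMPT = PromptTemplate(
    template=(
        "Use the following context to answer the question.\n"
        "Context: {context}\n"
        "Question: {question}\n"
        "Answer:"
    ),
    input_variables=["context", "question"],
)

def build_qa_chain(vectorstore):
    """`vectorstore` is assumed to be an existing LangChain vector store (e.g. Pinecone)."""
    return RetrievalQA.from_chain_type(
        llm=ChatOpenAI(model_name="gpt-4"),
        chain_type="stuff",
        retriever=vectorstore.as_retriever(),
        chain_type_kwargs={"prompt": QA_PROMPT},  # this replaces the default QA prompt
    )
```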
0
votes
1 answer

How to fine-tune and deploy ChatGPT on the cloud?

I know how to fine-tune ChatGPT. However, I am not able to find out how we can deploy the fine-tuned model on our own server/cloud. Can anyone please help me with this? I've created the fine-tuned ChatGPT model, but I'm not getting how to…
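As background, a model fine-tuned through OpenAI stays hosted by OpenAI; "deploying" it usually just means calling its model id from your own server or cloud app. A hedged sketch with a placeholder model id (not a real model):

```python
# Hedged sketch: a fine-tuned OpenAI model is invoked by id, like any other model.
import openai

response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:my-org::example-id",  # placeholder fine-tuned model id
    messages=[{"role": "user", "content": "Hello from my deployed backend"}],
)
print(response["choices"][0]["message"]["content"])
```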
0
votes
2 answers

AttributeError: 'tuple' object has no attribute 'is_single_input'

I am trying to use LangChain Agents in order to get answers to questions asked via the API, but I am facing the error "AttributeError: 'tuple' object has no attribute 'is_single_input'". Following is the code and error. Open to solutions and…
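For what it's worth, this error typically surfaces when one element of the tools sequence is a tuple rather than a Tool, often from a stray trailing comma. A hedged sketch; the tool itself is made up for illustration:

```python
# Hedged sketch: tools must be a flat list of Tool objects, not tuples.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

def word_count(text: str) -> str:
    return str(len(text.split()))

# tools = Tool(name="word_count", func=word_count, description="Counts words"),  # trailing comma -> tuple
tools = [Tool(name="word_count", func=word_count, description="Counts words")]    # correct: a list of Tools

agent = initialize_agent(
    tools,
    ChatOpenAI(model_name="gpt-4", temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
```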
0
votes
0 answers

Azure OpenAI GPT-4 Preview version cannot be found in the list of models

I received an email on 18 April confirming that I have been onboarded to the Azure OpenAI Service GPT-4 Preview, but the GPT-4 Preview version cannot be found in the list of OpenAI models and GPT-4 cannot be deployed, whether my resource group choice is…
0
votes
0 answers

How can I improve my ChatGPT API prompts?

I am having an issue with the ChatGPT API relating to prompt engineering. I have a dataset which consists of individual product titles and product descriptions, which was awful design, but I didn't have control over that part. I need to create aggregate…
0
votes
2 answers

Why is GPT-4 giving different answers with same prompt & temperature=0?

This is my code for calling the gpt-4 model: messages = [ {"role": "system", "content": system_msg}, {"role": "user", "content": req} ] response = openai.ChatCompletion.create( engine = "******-gpt-4-32k", messages =…
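For context, a hedged sketch of the same kind of call with sampling pinned down; even at temperature=0 the API does not promise bit-identical outputs across calls. The Azure deployment name is a placeholder, and the Azure api_type/base/version configuration is assumed to be set elsewhere:

```python
# Hedged sketch: pin temperature and top_p explicitly on an Azure GPT-4 deployment.
import openai

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize why outputs can still vary."},
]

response = openai.ChatCompletion.create(
    engine="my-gpt-4-32k-deployment",  # placeholder Azure deployment name
    messages=messages,
    temperature=0,
    top_p=1,
)
print(response["choices"][0]["message"]["content"])
```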
0
votes
1 answer

How to determine the expected prompt_tokens for a GPT-4 chat completion

For the Node.js code below, I am getting prompt_tokens = 24 in the response. I want to be able to determine what the expected prompt_tokens should be prior to making the request. import { Configuration, OpenAIApi } from 'openai'; …
Shivam Sinha
  • 4,924
  • 7
  • 43
  • 65
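As background, prompt tokens can be estimated client-side before the request. The sketch below is Python (the question's code is Node.js) and follows the approximation in OpenAI's token-counting cookbook; the per-message overhead constants apply to GPT-4-style chat formatting and the example messages are illustrative:

```python
# Hedged sketch: estimate prompt_tokens for a chat completion with tiktoken.
import tiktoken

def estimate_prompt_tokens(messages, model="gpt-4"):
    enc = tiktoken.encoding_for_model(model)
    tokens_per_message = 3  # approximate overhead for each message's role/content wrapping
    total = 0
    for message in messages:
        total += tokens_per_message
        for value in message.values():
            total += len(enc.encode(value))
    total += 3  # every reply is primed with <|start|>assistant<|message|>
    return total

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello there"},
]
print(estimate_prompt_tokens(messages))
```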
0
votes
0 answers

Text Completion Latency with Large Prompts - How to Avoid?

We've been experimenting back and forth between text completion and chat completion to build an interactive AI. What we've found is that with text completion the AI follows instructions much better, but after a number of messages are added to the…
0
votes
0 answers

gpt-35-turbo does not remember previous messages in PHP

I created a Telegram bot that responds to user messages using the OpenAI GPT API. Everything works fine, but there is an issue. With the gpt-35-turbo model, it is possible to pass parameters to keep track of messages and conversations, which I did,…
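As background (sketched in Python rather than PHP), the chat completion endpoint is stateless: "memory" means resending the prior messages on every call. The in-memory history keyed by Telegram chat id is an illustrative assumption:

```python
# Hedged sketch: keep conversation context by resending the message history each call.
import openai

histories = {}  # chat_id -> list of {"role": ..., "content": ...}

def reply(chat_id, user_text, system_prompt="You are a helpful assistant."):
    history = histories.setdefault(chat_id, [{"role": "system", "content": system_prompt}])
    history.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # on Azure, pass the deployment name via engine= instead
        messages=history,       # the full conversation so far, not just the last message
    )
    answer = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer
```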