
I am trying to use this repository to create semantic search for YouTube videos using OpenAI + Pinecone, but I am hitting a 429 error on this step: "Run the command npx tsx src/bin/process-yt-playlist.ts to pre-process the transcripts and fetch embeddings from OpenAI, then insert them into a Pinecone search index."

Any help is appreciated!!

Attached is my openai.ts file:

import pMap from 'p-map'
import unescape from 'unescape'

import * as config from '@/lib/config'

import * as types from './types'

import pMemoize from 'p-memoize'
import pRetry from 'p-retry'
import pThrottle from 'p-throttle'

// TODO: enforce max OPENAI_EMBEDDING_CTX_LENGTH of 8191

// https://platform.openai.com/docs/guides/rate-limits/what-are-the-rate-limits-for-our-api
// TODO: enforce TPM
const throttleRPM = pThrottle({
  // 3k per minute instead of 3.5k per minute to add padding
  limit: 3000,
  interval: 60 * 1000,
  strict: true
})

type PineconeCaptionVectorPending = {
  id: string
  input: string
  metadata: types.PineconeCaptionMetadata
}

export async function getEmbeddingsForVideoTranscript({
  transcript,
  title,
  openai,
  model = config.openaiEmbeddingModel,
  maxInputTokens = 100, // TODO???
  concurrency = 1
}: {
  transcript: types.Transcript
  title: string
  openai: types.OpenAIApi
  model?: string
  maxInputTokens?: number
  concurrency?: number
}) {
  const { videoId } = transcript

  let pendingVectors: PineconeCaptionVectorPending[] = []
  let currentStart = ''
  let currentNumTokensEstimate = 0
  let currentInput = ''
  let currentPartIndex = 0
  let currentVectorIndex = 0
  let isDone = false

  // const createEmbedding = pMemoize(throttleRPM(createEmbeddingImpl))

  // Pre-compute the embedding inputs, making sure none of them are too long
  do {
    isDone = currentPartIndex >= transcript.parts.length

    const part = transcript.parts[currentPartIndex]
    const text = unescape(part?.text ?? '')
      .replaceAll('[Music]', '')
      .replaceAll(/\s+/g, ' ') // collapse tabs, newlines, and runs of spaces
      .trim()
    const numTokens = getNumTokensEstimate(text)

    if (!isDone && currentNumTokensEstimate + numTokens < maxInputTokens) {
      if (!currentStart) {
        currentStart = part.start
      }

      currentNumTokensEstimate += numTokens
      currentInput = `${currentInput} ${text}`

      ++currentPartIndex
    } else {
      currentInput = currentInput.trim()
      if (isDone && !currentInput) {
        break
      }

      const currentVector: PineconeCaptionVectorPending = {
        id: `${videoId}:${currentVectorIndex++}`,
        input: currentInput,
        metadata: {
          title,
          videoId,
          text: currentInput,
          start: currentStart
        }
      }

      pendingVectors.push(currentVector)

      // reset current batch
      currentNumTokensEstimate = 0
      currentStart = ''
      currentInput = ''
    }
  } while (!isDone)

  let index = 0

  console.log("Entering embeddings calculation")
  // Evaluate all embeddings with a max concurrency
  // const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
  const vectors: types.PineconeCaptionVector[] = await pMap(
    pendingVectors,
    async (pendingVector) => {
      // await delay(6000); // add a delay of 1 second before each iteration
      console.log(pendingVector.input + " " + model)


      // const { data: embed } = await openai.createEmbedding({
      //   input: pendingVector.input,
      //   model
      // })

      async function createEmbeddingImpl({
        input = pendingVector.input,
        model = 'text-embedding-ada-002'
      }: {
        input: string
        model?: string
      }): Promise<number[]> {
        const res = await pRetry(
          () =>
            openai.createEmbedding({
              input,
              model
            }),
          {
            retries: 4,
            minTimeout: 1000,
            factor: 2.5
          }
        )
      
        return res.data.data[0].embedding
      }

      // NOTE: rebuilt on every iteration, so the memo cache never carries
      // across captions; the throttle still applies globally because the
      // pThrottle instance above is created once at module scope
      const embedding = pMemoize(throttleRPM(createEmbeddingImpl))

      const vector: types.PineconeCaptionVector = {
        id: pendingVector.id,
        metadata: pendingVector.metadata,
        values: await embedding({ input: pendingVector.input, model })
      }
      console.log(`OpenAI embedding call #${index} for ${pendingVector.id}`)
      index++
      return vector
    },
    {
      concurrency
    }
  )

  return vectors
}

// Rough token estimate: whitespace-delimited word count, which typically
// undercounts true BPE tokens but is cheap and dependency-free
function getNumTokensEstimate(input: string): number {
  const numTokens = (input || '')
    .split(/\s/)
    .map((token) => token.trim())
    .filter(Boolean).length

  return numTokens
}
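
Side note on the structure: the commented-out line near the top (const createEmbedding = pMemoize(throttleRPM(createEmbeddingImpl))) suggests the memoized, throttled wrapper was meant to be built once and shared, whereas my version rebuilds it inside every pMap callback. As far as I can tell, the hoisted shape would look roughly like this (same logic, just defined once above the pMap call so it can still close over openai):

// Built once: every caption reuses the same throttle queue and memo cache
async function createEmbeddingImpl({
  input,
  model = 'text-embedding-ada-002'
}: {
  input: string
  model?: string
}): Promise<number[]> {
  const res = await pRetry(
    () => openai.createEmbedding({ input, model }),
    { retries: 4, minTimeout: 1000, factor: 2.5 }
  )
  return res.data.data[0].embedding
}

// Note: pMemoize keys its cache on the first argument, which for objects
// means reference identity, so a custom cacheKey (e.g. JSON.stringify)
// would be needed to get real cache hits
const createEmbedding = pMemoize(throttleRPM(createEmbeddingImpl))

// ...then inside the pMap mapper:
// values: await createEmbedding({ input: pendingVector.input, model })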

I've tried increasing the delay between API calls so that my request rate sits well below the limit, but I am somehow still getting the same error.
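
Concretely, the pacing I tried looks like the commented-out delay in the file, just with values I kept increasing (even around 6000 ms per call, i.e. roughly 10 requests per minute at concurrency = 1, the 429 persisted):

// Simple pacing helper: resolves after ms milliseconds
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

// At the top of the pMap mapper, before each OpenAI call:
await delay(6000) // ~10 requests/minute at concurrency = 1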

  • What plan are you on and how many tokens are your messages? – asportnoy Mar 17 '23 at 03:32
  • I've used $.62 of my $18 from the free trial period, and my messages are all around this length: "The CLA Project is a video series i did with/for Mercedes-Benz featuring their new car the CLA. The video's creative was done entirely by me and my crew with Mercedes-Benz blessing." – Helpinghand Mar 17 '23 at 16:36
  • Should be fine. That text is 40 tokens, or 8,000 TPM on the ADA model, which is well under the 150,000 TPM limit for free trials. – asportnoy Mar 17 '23 at 19:12
  • I thought so as well! Do you see anything else that could be wrong with the file or repo at large? – Helpinghand Mar 18 '23 at 00:49
  • Most likely, something is causing it to make more requests than you're intending. – asportnoy Mar 18 '23 at 03:41

1 Answer


OpenAI sends a 429 Rate Limit error not only when you exceed the rate limit but also when you don't have any credits. I had been using free trial credits that expired after 3 months. You can see your available credits on the Usage page:

https://platform.openai.com/account/usage

Side note: once I put a credit card on file, it took about 5 minutes for the 429s to stop.
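
If you want to confirm which case you're in before adding a card, log the error body. The repo uses the v3 openai Node package (axios-based), so the API's error payload should be on err.response.data; an out-of-credits 429 reports an insufficient_quota error type rather than a genuine rate limit. A rough sketch, assuming that SDK version:

try {
  const res = await openai.createEmbedding({
    input: 'hello world',
    model: 'text-embedding-ada-002'
  })
  console.log('ok, embedding length:', res.data.data[0].embedding.length)
} catch (err: any) {
  // axios-style error: HTTP status plus OpenAI's error payload
  console.error(err.response?.status, err.response?.data?.error)
  // Expired or exhausted credits: { type: 'insufficient_quota', ... }
  // Genuine rate limiting: { code: 'rate_limit_exceeded', ... } (exact fields may vary)
}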

– bendytree