
I'm using LangChain to query a MySQL database, but the LangChain agents keep going over OpenAI's 4k token limit. When I looked into the agent's conversation history, it seems the agent called schema_sql_db multiple times, and the table schemas took up a lot of my tokens.
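
For context, here is a stripped-down sketch of the kind of setup I mean (the connection string and the question are placeholders, and the import paths are from an early-2023 langchain release, so they may differ in your version):

```python
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

# Placeholder MySQL connection string
db = SQLDatabase.from_uri("mysql+pymysql://user:pass@localhost/mydb")
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

# The agent calls the schema_sql_db tool for every table it inspects,
# and each call appends the full table schema to the prompt.
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many orders were placed last month?")
```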

Is there a way for me to intervene and remove the schemas from my conversation histories, and also summarize the agent's history when it gets too long?

Thanks!

David

2 Answers


I am also facing this issue. However, @Sam mentioned in my post that we can edit sql_database.py from the langchain library.

Langchain's SQLDatabaseSequentialChain to query database

You can find the library in C:\Users\<username>\AppData\Local\Programs\Python\Python39\Lib\site-packages\langchain if you are on Windows.

sql_database.py is not the only file you may need to edit. I am also working on this and finding it difficult.
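
Rather than editing the installed sql_database.py directly, one thing you could try is subclassing SQLDatabase and trimming what get_table_info returns, since that is what the schema_sql_db tool sends back to the agent. This is only a rough sketch (the character cap and the MySQL URI are placeholders), not something I have fully working:

```python
from langchain.sql_database import SQLDatabase


class TrimmedSQLDatabase(SQLDatabase):
    """SQLDatabase that returns a shortened schema to keep prompts small."""

    def get_table_info(self, table_names=None):
        # Get the normal CREATE TABLE output, then cut each table's
        # description down to a fixed number of characters.
        info = super().get_table_info(table_names)
        max_chars_per_table = 500  # arbitrary cap; tune for your schema
        trimmed = [block[:max_chars_per_table] for block in info.split("\n\n")]
        return "\n\n".join(trimmed)


# Use the subclass wherever you currently build the SQLDatabase
db = TrimmedSQLDatabase.from_uri("mysql+pymysql://user:pass@localhost/mydb")
```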

Hope you find a solution

naam_nrj

There are various ways to limit this token issue. I recommend using a vector database such as Pinecone or Weaviate if possible, but if not, try LangChain's memory types: ConversationBufferMemory, ConversationSummaryMemory, ConversationBufferWindowMemory, and others. These trim or summarize your content, and you can also write a separate memory class of your own.
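
As a rough sketch of the memory idea (the class names are the real langchain ones as of early 2023, but the chain and the question are just placeholders, and wiring a memory into the SQL agent specifically may need extra work):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory, ConversationSummaryMemory

llm = OpenAI(temperature=0)

# Keeps only the last k turns of the conversation verbatim.
window_memory = ConversationBufferWindowMemory(k=3)

# Replaces older turns with an LLM-generated running summary.
summary_memory = ConversationSummaryMemory(llm=llm)

# Either memory object can be plugged into a chain that accepts `memory`.
chain = ConversationChain(llm=llm, memory=summary_memory)
chain.predict(input="Summarize the orders table for me.")  # placeholder question
```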

For more information, check out this Python code.