I'm using langchain to query a MySQL database, but my agent keeps exceeding OpenAI's 4k-token context limit. When I inspected the agent's intermediate steps, I found it had called schema_sql_db multiple times, and the returned table schemas were eating up most of my tokens.
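For context, here's roughly how I'm setting things up, following the SQL agent pattern from the langchain docs (the connection URI and question are placeholders, and depending on your langchain version the toolkit may not take an `llm` argument):

```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

# Placeholder URI; my real one points at the MySQL instance in question.
db = SQLDatabase.from_uri("mysql+pymysql://user:pass@localhost/mydb")
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)

# After a few schema_sql_db calls the prompt blows past the 4k context window.
agent.run("Which customers placed the most orders last month?")
```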
Is there a way for me to intervene, both to strip the schemas out of the conversation history and to summarize the agent's history when it gets too long?
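Conceptually, this is the kind of hook I'm hoping for. To be clear, everything in this sketch is hypothetical: `trim_history`, `summarize`, and `count_tokens` are made-up stand-ins to illustrate the intent, not real langchain APIs:

```python
def summarize(messages):
    # Imagine an LLM call here that compresses old turns into one short message.
    return {"role": "system", "content": "summary of earlier conversation"}

def count_tokens(messages):
    # Rough stand-in for a real tokenizer (e.g. tiktoken).
    return sum(len(m["content"].split()) for m in messages)

def trim_history(messages, max_tokens=3000):
    # Drop the bulky schema_sql_db observations from earlier steps...
    messages = [m for m in messages if m.get("tool") != "schema_sql_db"]
    # ...and fold older turns into a summary once the history gets too long,
    # keeping the most recent turns verbatim.
    if count_tokens(messages) > max_tokens:
        messages = [summarize(messages[:-4])] + messages[-4:]
    return messages
```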
Thanks!