
The end user can copy a table from a PDF and paste the text into the OpenAI Playground, for example:

bird_id  bird_posts  bird_likes
012      2           5
013      0           4
056      57          70
612      0           12

and then prompt GPT with "Create table with the given text", and GPT generates a formatted table.

This works as expected. But when my input text is sizeable (say, 1076 tokens), I get the following error:

Token limit error: The input tokens exceeded the maximum allowed by the model. Please reduce the number of input tokens to continue. Refer to the token count in the 'Parameters' panel for more details.

I will use Python for text preprocessing and will get the data from the UI. If my input were textual data (like passages), I could use the approaches suggested by LangChain. But I cannot summarize tabular text iteratively, as I might lose rows or columns.
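One way to stay under the limit without lossy summarization is to split the table into batches of rows, repeat the header in each batch, and send each batch as a separate prompt. A minimal sketch in Python (the 4-characters-per-token ratio and the `max_tokens` budget are rough assumptions, not an exact tokenizer; use tiktoken for precise counts):

```python
# Sketch: split a pasted table into row chunks that each fit a token budget,
# repeating the header line so no rows or columns are dropped.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def chunk_table(table_text: str, max_tokens: int = 1000) -> list[str]:
    lines = [ln for ln in table_text.strip().splitlines() if ln.strip()]
    header, rows = lines[0], lines[1:]
    chunks, current = [], [header]
    for row in rows:
        candidate = "\n".join(current + [row])
        # Flush the current chunk when adding this row would exceed the
        # budget (but always keep at least one row per chunk).
        if estimate_tokens(candidate) > max_tokens and len(current) > 1:
            chunks.append("\n".join(current))
            current = [header, row]
        else:
            current.append(row)
    chunks.append("\n".join(current))
    return chunks

table = """bird_id bird_posts bird_likes
012 2 5
013 0 4
056 57 70
612 0 12"""

# Each chunk starts with the header and can be sent as its own prompt.
for chunk in chunk_table(table, max_tokens=10):
    print(chunk)
    print("---")
```

Each chunk can then be sent with the same "Create table with the given text" instruction, and the resulting tables concatenated on the client side.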

Any input on how this can be handled?

usr_lal123

1 Answer


In general this cannot be solved for arbitrary table sizes: these models simply have a limited context length, and that is a hard limitation. As far as I know, this is a subject of active research, for example: https://arxiv.org/abs/2304.11062 (but this is not implemented on OpenAI's side and has its own limitations and difficulties).

You can try the new gpt-3.5-turbo-16k model, which has a context size of 16384 tokens (compared to 4096 tokens for gpt-3.5-turbo, which you seem to be using).
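If you only want to pay for the larger model when necessary, you could estimate the prompt size up front and pick a model accordingly. A sketch under stated assumptions (the chars/4 heuristic and the `reserve_for_output` value are assumptions; the model names and limits are from the answer above, and a real tokenizer such as tiktoken would give exact counts):

```python
# Sketch: choose the smallest model whose context window fits the prompt
# plus some headroom reserved for the model's reply.

MODEL_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
}

def pick_model(prompt: str, reserve_for_output: int = 1024) -> str:
    est = len(prompt) // 4  # rough token estimate (~4 chars/token)
    for model, limit in MODEL_LIMITS.items():
        if est + reserve_for_output <= limit:
            return model
    raise ValueError("Prompt too large even for the 16k model; chunk it first.")
```

For tables too large even for the 16k window, chunking the rows and repeating the header per chunk remains the fallback.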

godfryd
  • 582
  • 3
  • 16