
What is the maximum passage length or hardware limit for using the transformer-qa model for reading comprehension in AllenNLP, called like this:

from allennlp.predictors.predictor import Predictor
Predictor.from_path('https://storage.googleapis.com/allennlp-public-models/transformer-qa-2020-10-03.tar.gz').predict(passage=passage, question=question)

I'm getting a "DefaultCPUAllocator: not enough memory: you tried to allocate 23437770752 bytes. Buy new RAM!" error.

1 Answer


I don't think that error message comes from AllenNLP. What are you running when you get it?

That number is about 22 GB, which is far more than the TransformerQA model should need unless you are sending it a very long sequence. Generally, TransformerQA can only process 512 tokens at a time. If your text has more than 512 tokens, it will break it up into multiple sequences of 512 tokens each. The only limit on how many of these 512-token sequences it creates is the size of your memory and your patience.
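As a rough way to gauge this before calling predict(), here is a minimal sketch that estimates how many 512-token windows a long passage will turn into. It is not AllenNLP's internal logic; it assumes the transformer-qa archive wraps a RoBERTa-large model, so that tokenizer gives a reasonable token count (swap in a different tokenizer if your model differs), and the special-token overhead of 4 is an approximation.

# Rough estimate of how many 512-token windows a passage produces.
# Assumption: RoBERTa-large tokenizer matches the model in the archive.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

def estimate_windows(passage: str, question: str, max_length: int = 512) -> int:
    question_len = len(tokenizer.tokenize(question))
    passage_len = len(tokenizer.tokenize(passage))
    # Each window holds the question, a few special tokens, and a slice of
    # the passage; 4 is an approximate special-token overhead.
    room_per_window = max_length - question_len - 4
    return -(-passage_len // room_per_window)  # ceiling division

passage = "Some very long document text ... " * 1000
question = "What is the maximum sequence length?"
print(estimate_windows(passage, question))

If this returns hundreds of windows, the batch of sequences the predictor has to run can easily exhaust your RAM, which would explain an allocation error of that size.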

Dirk Groeneveld