System specifications:
- Device: NVIDIA Jetson AGX Xavier [16 GB]
- JetPack: 4.5.1
- CPU: 8-core
- RAM: 32 GB

The pipeline looks like:
import stanza

nlp = stanza.Pipeline('en', use_gpu=True, batch_size=100, tokenize_batch_size=32, pos_batch_size=32, depparse_batch_size=32)
doc = nlp(corpus)
I am trying to build a Stanza Document with the processors tokenize, pos, depparse, error, sentiment, and ner. While building the Document from a roughly 300 MB text dataset, the process runs out of RAM, the Jupyter notebook stops, and the kernel dies; the same happens even with 100 MB of data. I have tried both higher and lower batch sizes, but the problem persists.
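Would processing the corpus in chunks, instead of a single nlp(corpus) call, be the right way around this? Below is a minimal sketch of what I mean; the 1 MB chunk size and splitting on blank lines are assumptions about the data, and 'corpus.txt' is a placeholder path.

import stanza

def iter_chunks(path, max_chars=1_000_000):
    # Assumption: records in the corpus are separated by blank lines.
    buf, size = [], 0
    with open(path, encoding='utf-8') as f:
        for para in f.read().split('\n\n'):
            buf.append(para)
            size += len(para)
            if size >= max_chars:
                yield '\n\n'.join(buf)
                buf, size = [], 0
    if buf:
        yield '\n\n'.join(buf)

# Same pipeline settings as above.
nlp = stanza.Pipeline('en', use_gpu=True, tokenize_batch_size=32, pos_batch_size=32, depparse_batch_size=32)

for chunk in iter_chunks('corpus.txt'):
    doc = nlp(chunk)
    # Extract whatever is needed from doc here, then drop the reference,
    # so each annotated Document can be garbage-collected rather than
    # keeping all of them in memory at once.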