I'm trying to run pycorenlp on a long text and get a "CoreNLP request timed out. Your document may be too long" error message. How can I fix this? Is there a way to increase Stanford CoreNLP's timeout?
I don't want to segment the text into smaller pieces.
Here is the code I use:
'''
From https://github.com/smilli/py-corenlp/blob/master/example.py
'''
from pycorenlp import StanfordCoreNLP
import pprint

if __name__ == '__main__':
    # Connect to a CoreNLP server already running on port 9000
    nlp = StanfordCoreNLP('http://localhost:9000')
    with open("long_text.txt") as fp:
        text = fp.read()
    output = nlp.annotate(text, properties={
        'annotators': 'tokenize,ssplit,pos,depparse,parse',
        'outputFormat': 'json'
    })
    pp = pprint.PrettyPrinter(indent=4)
    pp.pprint(output)
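From what I've read, the server can take a per-request timeout value (in milliseconds) through the request properties. A minimal sketch of what I have in mind, assuming the server honors a 'timeout' entry in the properties:

# Hypothetical variant: raise the per-request timeout to 50 seconds.
# The 'timeout' value is in milliseconds; this assumes the server
# reads a 'timeout' entry from the request properties.
output = nlp.annotate(text, properties={
    'timeout': '50000',
    'annotators': 'tokenize,ssplit,pos,depparse,parse',
    'outputFormat': 'json'
})

Is this the right approach, or does the timeout have to be set on the server side?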
The Stanford CoreNLP server was launched with:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer 9000
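If I understand the server options correctly, there is also a -timeout flag (again in milliseconds) that can be set at launch, e.g.:

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 50000

but I'm not sure whether the launch flag or a per-request property is the preferred way to handle a long document without splitting it.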