
I am running a Stanford CoreNLP server:

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9001 -timeout 50000

Whenever it receives some text, it prints the text in the shell where it is running. How can I prevent this from happening?


In case it matters, here is the code I use to send data to the Stanford CoreNLP server:

'''
From https://github.com/smilli/py-corenlp/blob/master/example.py
'''
from pycorenlp import StanfordCoreNLP
import pprint

if __name__ == '__main__':
    # Must match the port the server was started on (-port 9001 above).
    nlp = StanfordCoreNLP('http://localhost:9001')
    with open("long_text.txt") as fp:
        text = fp.read()
    output = nlp.annotate(text, properties={
        'annotators': 'tokenize,ssplit,pos,depparse,parse',
        'outputFormat': 'json'
    })
    pp = pprint.PrettyPrinter(indent=4)
    pp.pprint(output)
Franck Dernoncourt

3 Answers


There's currently not a way to do this, but you're the second person who's asked, so it's now in the GitHub code and will make it into the next release. In the future, you should be able to set the -quiet flag, and the server will not write to standard out.
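For example, once a release with that flag is available, the command from the question should only need the extra flag appended (assuming nothing else about the command changes):

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9001 -timeout 50000 -quiet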

Gabor Angeli

I asked the same question and can offer a workaround. I am currently running the server in a virtual machine, and to suppress the logging output for now I redirect both stdout and stderr to /dev/null:

java -mx6g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -prettyPrint false > /dev/null 2>&1

This also gives a considerable performance boost while we wait for 3.6.1.
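If the server is launched from Python rather than from a shell, the same effect can be achieved by discarding the child process's output streams. A minimal sketch, reusing the java command from above (everything else here is an assumption about how you start the server):

import subprocess

# Start the CoreNLP server and discard everything it writes to stdout/stderr.
server = subprocess.Popen(
    ['java', '-mx6g', '-cp', '*',
     'edu.stanford.nlp.pipeline.StanfordCoreNLPServer',
     '-prettyPrint', 'false'],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)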

Stefan Falk
  • I added '-quiet' to the command line, and it seems to work, though processing a lot of rows still takes some time – jrdunson Dec 11 '19 at 16:36

When using the CoreNLPClient from the Python stanza library¹, you can pass the be_quiet option to turn off the logging output of the server it launches.

from stanza.server import CoreNLPClient
client = CoreNLPClient(endpoint='http://localhost:9000', be_quiet=True)

¹ Which the original question isn't using, but a future visitor to this question might be.
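In the spirit of the question's script, a fuller sketch, assuming stanza is installed and is allowed to launch and manage its own CoreNLP server (the annotators and input file are taken from the question):

import pprint
from stanza.server import CoreNLPClient

# Same input file as in the question.
with open("long_text.txt") as fp:
    text = fp.read()

# be_quiet=True silences the CoreNLP server process that the client launches.
with CoreNLPClient(annotators=['tokenize', 'ssplit', 'pos', 'depparse', 'parse'],
                   output_format='json',
                   be_quiet=True) as client:
    output = client.annotate(text)

pprint.PrettyPrinter(indent=4).pprint(output)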

TuringTux