I am trying to figure out what improvements SyntaxNet can bring to the analysis of long documents, compared with "dumb" measures like word counts, sentence length, etc.
The goal is to derive more accurate linguistic measures (such as "tone" or "sophistication") for quantifying attributes of longer documents like newspaper articles or letters/memos.
Specifically, I am unsure what to do with SyntaxNet's output once the POS tagging and dependency parsing are done. What do people typically do to process it further?
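For reference, the output I am working with is in CoNLL format, one token per line with tab-separated columns (roughly: ID, form, lemma, coarse POS, fine POS, features, head index, dependency relation), something like:

```
1   Bob       _   NOUN   NNP   _   2   nsubj   _   _
2   brought   _   VERB   VBD   _   0   ROOT    _   _
3   the       _   DET    DT    _   4   det     _   _
4   pizza     _   NOUN   NN    _   2   dobj    _   _
```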
Ideally, I am looking for an example workflow that turns SyntaxNet output into quantitative variables usable in statistical analysis, along the lines of the sketch below.
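To make the question concrete, here is a minimal sketch of the kind of workflow I am imagining. The file name, the column positions, and the particular features are only my guesses, and the dependency-relation labels ("ccomp", "advcl") depend on the label set of the model used:

```python
# Minimal sketch -- not an established pipeline. Reads SyntaxNet's CoNLL
# output back in and reduces each sentence to a handful of numbers.
from collections import Counter

def read_conll(path):
    """Yield sentences as lists of (index, form, pos, head, deprel) tuples."""
    sentence = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                 # blank line marks a sentence boundary
                if sentence:
                    yield sentence
                sentence = []
                continue
            cols = line.split("\t")
            # CoNLL columns (0-indexed): 0=ID, 1=FORM, 4=fine POS,
            # 6=HEAD, 7=DEPREL
            sentence.append((int(cols[0]), cols[1], cols[4],
                             int(cols[6]), cols[7]))
    if sentence:
        yield sentence

def sentence_features(sent):
    """Turn one parsed sentence into a small numeric feature vector."""
    n = len(sent)
    pos = Counter(tag for _, _, tag, _, _ in sent)
    # Mean dependency distance: average gap between a word and its head,
    # a rough proxy for syntactic complexity.
    arcs = [(i, head) for i, _, _, head, _ in sent if head > 0]
    mean_dist = sum(abs(i - h) for i, h in arcs) / max(len(arcs), 1)
    return {
        "n_tokens": n,
        "noun_ratio": sum(c for t, c in pos.items() if t.startswith("NN")) / n,
        "verb_ratio": sum(c for t, c in pos.items() if t.startswith("VB")) / n,
        "mean_dep_distance": mean_dist,
        # Complement/adverbial clauses per sentence, as one "sophistication" cue.
        "n_sub_clauses": sum(1 for *_, rel in sent
                             if rel in ("ccomp", "advcl")),
    }

# Hypothetical input file produced by SyntaxNet's demo script.
rows = [sentence_features(s) for s in read_conll("parsed_memo.conll")]
```

The per-sentence dictionaries in `rows` could then be averaged per document to give a feature matrix for a regression, but I do not know whether these are the features people actually use in practice.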
Also, can anyone point me to sources comparing the inferences drawn from a "smart" SyntaxNet-based analysis with those obtainable from word counts, sentence length, etc.?