I want to parse a large number of examples/sentences using a SyntaxNet model. Currently I am using the Python subprocess module to run demo.sh once per example, which is taking a lot of time because it reloads all the trained models on every invocation. Please tell me an alternative to this.
Are you not putting all the sentences in a file and running demo.sh on that? – kskp Oct 03 '16 at 14:29
1 Answer
There are instructions on how to process files in the SyntaxNet README:
"To change the pipeline to read and write to specific files (as opposed to piping through stdin and stdout), we have to modify the demo.sh to point to the files we want."
https://github.com/tensorflow/models/tree/master/syntaxnet#annotating-a-corpus
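To illustrate the batching idea from the comment above: instead of spawning demo.sh once per sentence, you can write all sentences to a single file and pipe that file through demo.sh in one subprocess call, so the models are loaded only once. This is a minimal sketch; the `./syntaxnet/demo.sh` path and working directory are assumptions and must be adjusted to your SyntaxNet checkout.

```python
import subprocess
import tempfile

# All sentences to parse, one per line. Batching them into a single
# input file means demo.sh (and the trained models) run only once.
sentences = [
    "Bob brought the pizza to Alice.",
    "She ate it quickly.",
]

# Write the batch to a temporary file, one sentence per line.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(sentences) + "\n")
    input_path = f.name

# demo.sh reads from stdin and writes to stdout, so a single call
# parses the whole batch. Uncomment and adjust the path to your
# syntaxnet checkout (the script location here is an assumption):
#
# with open(input_path) as batch:
#     result = subprocess.run(
#         ["./syntaxnet/demo.sh"],
#         stdin=batch,
#         stdout=subprocess.PIPE,
#         cwd="/path/to/models/syntaxnet",  # hypothetical checkout dir
#     )
# print(result.stdout.decode())

print(input_path)  # path of the batch file handed to demo.sh
```

The same idea applies if you follow the README link above and edit demo.sh to read from a file directly: either way, the expensive model loading happens once per corpus rather than once per sentence.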

– m33lky