I had some difficulties training the SyntaxNet POS tagger and parser, and I found a good solution, which I describe in the Answers section. If you are stuck on one of the following problems, this documentation really helps:

  1. The training, testing, and tuning data sets provided by Universal Dependencies are in .conllu format, and I did not know how to convert them to .conll files; even after I found conllu-formconvert.py and conllu_to_conllx.pl, I still had no clue how to use them. If you have a problem like this, the documentation has a Python file named convert.py, which is called in the main body of train.sh and train_p.sh to convert the downloaded data sets into files readable by SyntaxNet (a minimal sketch of such a conversion follows this list).
  2. Whenever I ran bazel test on parser_trainer_test.sh (a Stack Overflow question and answer had told me to run it), it failed and gave me this error in test.log: path to save model cannot be found: --model_path=$TMP_DIR/brain_parser/greedy/$PARAMS/model. The documentation splits POS tagger and parser training and shows how to use different directories in parser_trainer and parser_eval (see the directory sketch after this list). Even if you don't want to use the document itself, you can update your own files based on it.
  3. For me, training the parser took one day, so don't panic; it takes time "if you do not use a GPU server", as dsindex said.
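For anyone who only needs the gist of the conversion in point 1, here is a minimal sketch of what such a script has to do, assuming the standard 10-column CoNLL-U and CoNLL-X layouts; the actual convert.py in the dsindex repository may differ in its details:

    import sys

    # Minimal sketch of a CoNLL-U -> CoNLL-X conversion.
    # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
    # CoNLL-X columns: ID FORM LEMMA CPOSTAG POSTAG FEATS HEAD DEPREL PHEAD PDEPREL
    def conllu_to_conll(src_path, dst_path):
        with open(src_path, encoding="utf-8") as src, \
             open(dst_path, "w", encoding="utf-8") as dst:
            for line in src:
                line = line.rstrip("\n")
                if not line:              # blank line = sentence boundary, keep it
                    dst.write("\n")
                    continue
                if line.startswith("#"):  # CoNLL-U comments have no CoNLL-X counterpart
                    continue
                cols = line.split("\t")
                if "-" in cols[0] or "." in cols[0]:  # skip multiword-token / empty-node rows
                    continue
                xpos = cols[4] if cols[4] != "_" else cols[3]  # fall back to UPOS
                dst.write("\t".join([cols[0], cols[1], cols[2], cols[3], xpos,
                                     cols[5], cols[6], cols[7], "_", "_"]) + "\n")

    if __name__ == "__main__":
        conllu_to_conll(sys.argv[1], sys.argv[2])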
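And for point 2, this is the kind of directory split the documentation describes. One likely cause of the "path to save model cannot be found" error is that the output directory does not exist yet when parser_trainer tries to save; the directory names below are illustrative only, not the repo's actual layout:

    import os

    # Give the POS tagger and the parser separate output directories so
    # parser_trainer and parser_eval do not clobber each other's models.
    MODEL_ROOT = os.path.join(os.environ.get("TMP_DIR", "/tmp"), "syntaxnet-models")

    for component in ("brain_pos/greedy", "brain_parser/greedy"):
        path = os.path.join(MODEL_ROOT, component)
        os.makedirs(path, exist_ok=True)  # create it up front so saving cannot fail
        print("prepared", path)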

1 Answer

I got an answer on GitHub from dsindex and found it very useful. The documentation at https://github.com/dsindex/syntaxnet includes:

convert_corpus
train_pos_tagger
preprocess_with_tagger

As dsindex said, and I quote: "I thought you want to train pos-tagger. if then, run ./train.sh"
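Before running ./train.sh, it can save time to confirm that convert_corpus really produced 10-column CoNLL. This check is my own addition, not part of the repository, and the corpus path is an assumption:

    # Sanity check: every non-blank, non-comment line of a CoNLL file
    # should have exactly 10 tab-separated columns.
    def check_conll(path):
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, 1):
                line = line.rstrip("\n")
                if not line or line.startswith("#"):
                    continue
                n = len(line.split("\t"))
                if n != 10:
                    raise ValueError(f"{path}:{lineno}: expected 10 columns, got {n}")
        print(path, "looks like valid CoNLL")

    check_conll("corpus/train.conll")  # assumed path to the converted training set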
