I had some difficulties training the SyntaxNet POS tagger and parser, and I found a good solution, which I describe in the Answers section. If you are stuck on one of the following problems, this documentation should really help you:
- The training, testing, and tuning data sets released by Universal Dependencies are in `.conllu` format, and I did not know how to convert them to `.conll` files; even after I found `conllu-formconvert.py` and `conllu_to_conllx.pl`, I still had no clue how to use them. If you run into a problem like this, the documentation has a Python file named `convert.py`, which is called in the main body of `train.sh` and [train_p.sh][5] to convert the downloaded data sets into files readable by SyntaxNet.
- Whenever I ran `bazel test` (which one Stack Overflow question and answer told me to run) on `parser_trainer_test.sh`, it failed and gave me this error in `test.log`:

      path to save model cannot be found : --model_path=$TMP_DIR/brain_parser/greedy/$PARAMS/model

  The documentation splits training of the POS tagger and the parser, and shows how to use different directories in `parser_trainer` and `parser_eval`. Even if you don't want to use the document itself, you can update your files based on it.
- For me, training the parser took one day, so don't panic; it takes time "if you do not use a GPU server", as disinex said.
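To give an idea of what the `.conllu`-to-`.conll` conversion in the first point boils down to, here is my own minimal Python sketch (this is **not** the `convert.py` shipped with the documentation, just the basic column mapping; it ignores edge cases the real scripts may handle):

```python
def conllu_to_conll(conllu_text):
    """Minimal CoNLL-U -> CoNLL-X conversion sketch.

    Drops comment lines, multiword-token ranges ("1-2") and empty
    nodes ("3.1"), keeps the first 8 CoNLL-U columns (ID, FORM, LEMMA,
    UPOS, XPOS, FEATS, HEAD, DEPREL) and fills the CoNLL-X PHEAD and
    PDEPREL columns with "_" placeholders.
    """
    out_lines = []
    for line in conllu_text.splitlines():
        if line.startswith("#"):      # sentence-level comments: not in CoNLL-X
            continue
        if not line.strip():          # blank line = sentence boundary, keep it
            out_lines.append("")
            continue
        cols = line.split("\t")
        token_id = cols[0]
        if "-" in token_id or "." in token_id:  # multiword range / empty node
            continue
        out_lines.append("\t".join(cols[:8] + ["_", "_"]))
    return "\n".join(out_lines)
```

Usage: read a `.conllu` file, pass its text through `conllu_to_conll`, and write the result to a `.conll` file that SyntaxNet can read.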
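For the tagger/parser directory split in the second point, this is a hedged sketch of the two invocations (flag names follow the SyntaxNet README's tutorial; `PARAMS`, the corpus names, and `models` are placeholders you must adapt to your own setup):

```shell
#!/bin/bash
# Sketch: train the POS tagger and the parser under separate
# subdirectories so their models do not clobber each other.
PARAMS=128-0.08-3600-0.9-0   # placeholder hyperparameter string
OUTPUT_PATH=models           # tagger lands in models/brain_pos/...,
                             # parser in models/brain_parser/...

# 1) POS tagger: note --arg_prefix=brain_pos
bazel-bin/syntaxnet/parser_trainer \
  --task_context=syntaxnet/context.pbtxt \
  --arg_prefix=brain_pos \
  --compute_lexicon \
  --graph_builder=greedy \
  --training_corpus=training-corpus \
  --tuning_corpus=tuning-corpus \
  --output_path=$OUTPUT_PATH \
  --params=$PARAMS

# 2) tag the tuning corpus with the model step 1 just wrote
bazel-bin/syntaxnet/parser_eval \
  --task_context=$OUTPUT_PATH/brain_pos/greedy/$PARAMS/context \
  --arg_prefix=brain_pos \
  --graph_builder=greedy \
  --input=tuning-corpus \
  --output=tagged-tuning-corpus \
  --model_path=$OUTPUT_PATH/brain_pos/greedy/$PARAMS/model

# 3) parser training follows the same pattern with
#    --arg_prefix=brain_parser, so its model ends up under
#    $OUTPUT_PATH/brain_parser/greedy/$PARAMS/model -- the path the
#    test.log error complained about.
```

The point is that `--arg_prefix` and `--output_path` together determine the model directory, so keeping `brain_pos` and `brain_parser` separate avoids the "path to save model cannot be found" failure.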