I am trying to train the Google SyntaxNet model on a different language, using the datasets available at http://universaldependencies.org/ and following this tutorial. I edited the syntaxnet/context.pbtxt
file, but when I run the Bazel script
provided in the guide I get the following error:
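For reference, the input entries I changed in context.pbtxt look roughly like this (the file paths are placeholders for my own corpus files; the structure follows the stock context.pbtxt that ships with SyntaxNet):

```
input {
  name: 'training-corpus'
  record_format: 'conll-sentence'
  Part {
    file_pattern: './corpus/mylang-ud-train.conllu'
  }
}
input {
  name: 'tuning-corpus'
  record_format: 'conll-sentence'
  Part {
    file_pattern: './corpus/mylang-ud-dev.conllu'
  }
}
```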
syntaxnet/term_frequency_map.cc:62] Check failed: ::tensorflow::Status::OK() == (tensorflow::Env::Default()->NewRandomAccessFile(filename, &file)) (OK vs. Not found: brain_pos/greedy/0/label-map)
My question is: do I have to provide this file and the other files such as fine-to-universal.map, tag-map, word-map, and so on, or does the training step create them from the training dataset? And if I have to provide them, how can I build them?
Thanks in advance