Google recently released a new implementation of the seq2seq algorithm: https://github.com/google/seq2seq/blob/master/docs/nmt.md. They changed the entire internal structure of the code. I tried to modify the code for my own purposes by creating a new InputPipeline decoder (a sketch of the kind of class I added is shown after the log excerpt below). When I run the code without any modifications, everything works fine: all tasks are created and training launches without any problem. Once experiment creation starts, TensorFlow logs the creation of all of the hooks:
...
INFO:tensorflow:Creating ParallelTextInputPipeline in mode=train
...
INFO:tensorflow:Creating ParallelTextInputPipeline in mode=eval
...
INFO:tensorflow:Creating PrintModelAnalysisHook in mode=train
...
INFO:tensorflow:Creating RougeMetricSpec in mode=eval
...
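For reference, my modification is an input pipeline class along these lines. This is only a simplified sketch, not my real code: the class name and the "files" parameter are illustrative, and I am assuming the InputPipeline base class exposes default_params(), make_data_provider(), feature_keys and label_keys the way ParallelTextInputPipeline does in the released code.

    # Simplified, illustrative sketch of a custom input pipeline class.
    from seq2seq.data import input_pipeline


    class MyCustomInputPipeline(input_pipeline.InputPipeline):
      """Reads my custom data format and produces source/target token tensors."""

      @staticmethod
      def default_params():
        params = input_pipeline.InputPipeline.default_params()
        params.update({"files": ""})  # illustrative parameter, not the library's
        return params

      def make_data_provider(self, **kwargs):
        # In my real code this builds a data provider around a custom decoder;
        # omitted here because it is specific to my data format.
        raise NotImplementedError("sketch only")

      @property
      def feature_keys(self):
        return set(["source_tokens", "source_len"])

      @property
      def label_keys(self):
        return set(["target_tokens", "target_len"])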
This part works fine. But after that, the original code goes on to create the AttentionSeq2Seq model, the vocabulary lookup tables, the BidirectionalRNNEncoder, AttentionLayerDot, AttentionDecoder, and ZeroBridge tasks in train mode:
INFO:tensorflow:Creating AttentionSeq2Seq in mode=train
...
INFO:tensorflow:Creating vocabulary lookup table of size 34
INFO:tensorflow:Creating vocabulary lookup table of size 46
INFO:tensorflow:Creating BidirectionalRNNEncoder in mode=train
...
And this part, the creation of the AttentionSeq2Seq, the vocabulary lookup tables, and so on, no longer launches after my modifications. Could you explain how these tasks are launched in the original version?
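For what it's worth, my current (possibly wrong) mental model is that each "Creating X in mode=Y" message is printed from the constructor of the corresponding class, and that the model-side classes (AttentionSeq2Seq, the encoder, the decoder, ...) are only instantiated later, inside the model function that the estimator invokes once training actually starts, while the input pipelines and hooks are constructed up front during experiment setup. A minimal pure-Python sketch of that idea (the class and function names below are mine, not the library's internals):

    import logging

    logging.basicConfig(level=logging.INFO)


    class LoggedComponent(object):
      # Base class that logs its own creation, mimicking the
      # "Creating <ClassName> in mode=<mode>" messages above.
      def __init__(self, mode):
        self.mode = mode
        logging.info("Creating %s in mode=%s", type(self).__name__, mode)


    class MyInputPipeline(LoggedComponent):
      """Constructed eagerly, during experiment setup."""


    class MySeq2SeqModel(LoggedComponent):
      """Constructed lazily, only when the model function runs."""


    def model_fn(features, labels, mode):
      # In my understanding, this is the step that produces the
      # "Creating AttentionSeq2Seq ..." messages in the original code.
      return MySeq2SeqModel(mode)


    # Experiment setup: only the input pipelines (and hooks) are created here.
    train_pipeline = MyInputPipeline("train")
    eval_pipeline = MyInputPipeline("eval")

    # Training starts: only now is the model itself created.
    model = model_fn(features=None, labels=None, mode="train")

Is that roughly what happens in the original version, and if so, which part of the setup actually triggers the call into the model function?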