I'm trying to train the TensorFlow textsum model on my own data. The training set contains around 50k articles, and I didn't change any of the default settings in the textsum model. Currently, my average loss is around 1.5 to 3.0, but the decoded results are just lists of common words such as 'the', 'of', 'in', and 'to':
decoded
of in in and and to to to to to
of a in and and to to to to
cats s s and and and to to
ref
rift between officers and residents as killings persist in south bronx
among deaths in 2016, a heavy toll in pop music
kim jong un says north korea is preparing to test long range missile
I don't expect very accurate results at this stage, but even a list of random words would be better than 'of in in and and and to to'.
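For context, this is roughly how I sanity-check the converted training data before feeding it to the model. It's only a sketch: it assumes the data was packed the standard way (length-prefixed serialized tf.Example records with 'article' and 'abstract' features, as produced by data_convert_example.py in the textsum repo), and the path data/training-0 is just a placeholder for one of my data shards.

    import struct
    from tensorflow.core.example import example_pb2

    def read_textsum_records(path, limit=3):
        """Yield (article, abstract) pairs from a textsum binary data file."""
        with open(path, 'rb') as reader:
            for _ in range(limit):
                # Each record is an 8-byte length prefix followed by a
                # serialized tf.Example proto.
                len_bytes = reader.read(8)
                if not len_bytes:
                    break
                str_len = struct.unpack('q', len_bytes)[0]
                example = example_pb2.Example.FromString(reader.read(str_len))
                article = example.features.feature['article'].bytes_list.value[0]
                abstract = example.features.feature['abstract'].bytes_list.value[0]
                yield (article.decode('utf-8', 'replace'),
                       abstract.decode('utf-8', 'replace'))

    # 'data/training-0' is a placeholder for one shard of my converted data.
    for article, abstract in read_textsum_records('data/training-0'):
        print('article :', article[:120])
        print('abstract:', abstract[:120])

The records look fine when I print them this way, so I'm not sure what else to check.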