I'm training an encoder RNN for a sequence-to-sequence model with batches of 10 sentences, where every sentence has 60 words.
In the encoder network of the seq2seq model, what should the value of `input_lengths` be?
Should it be the number of words in each sentence (60), or the number of sentences in each batch (10)?
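For context, assuming this refers to the `lengths` argument of PyTorch's `torch.nn.utils.rnn.pack_padded_sequence` (often named `input_lengths` in seq2seq tutorials): it holds one entry per sentence in the batch, so its shape is `[batch_size]` (here 10), and each entry is that sentence's true word count before padding (at most 60). A minimal plain-Python sketch with a hypothetical padded batch and an assumed padding id `PAD = 0`:

```python
# Hypothetical padded batch: 3 sentences (batch_size=3), max length 5 words.
# In the question's setup it would be batch_size=10, max length 60.
PAD = 0  # assumed padding token id

batch = [
    [4, 9, 2, PAD, PAD],    # sentence with 3 real words
    [7, 1, 8, 3, 6],        # sentence with 5 real words
    [5, 2, PAD, PAD, PAD],  # sentence with 2 real words
]

# input_lengths: one entry PER SENTENCE, i.e. shape [batch_size],
# not the maximum sentence length.
input_lengths = [sum(1 for tok in sent if tok != PAD) for sent in batch]
print(input_lengths)  # -> [3, 5, 2]
```

If every sentence in the batch really is exactly 60 words with no padding, the tensor would simply be `[60] * 10`.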