This might be too stupid to ask ... but ...
When using an LSTM after the initial Embedding layer in Keras (for example, the Keras LSTM-IMDB tutorial code), how does the Embedding layer know that there is a time dimension? In other words, how does the Embedding layer know the length of each sequence in the training data set? How does it know I am training on sentences rather than on individual words? Does it simply infer this during training?
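For context, here is how I currently picture it: an Embedding layer is essentially a lookup table, so the time dimension seems to just come from the shape of the integer input. A toy NumPy sketch of that shape mechanics (my own illustration, not the actual Keras internals):

```python
import numpy as np

# A hypothetical embedding table: vocab_size x embed_dim.
vocab_size, embed_dim = 10, 4
table = np.random.rand(vocab_size, embed_dim)

# A batch of 2 padded token-id sequences, each 5 timesteps long: shape (2, 5).
batch = np.array([[1, 2, 3, 0, 0],
                  [4, 5, 0, 0, 0]])

# Indexing the table with the integer batch yields shape (2, 5, 4):
# the time dimension (5) is carried through from the input, per token.
out = table[batch]
print(out.shape)  # (2, 5, 4)
```

If that picture is right, the layer never "knows" about sentences at all; it just maps every integer in the (batch, timesteps) input to a vector, and the LSTM is what treats the second axis as time. Is that correct?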