
This might be too stupid to ask ... but ...

When using an LSTM after the initial Embedding layer in Keras (for example, the Keras LSTM-IMDB tutorial code), how does the Embedding layer know that there is a time dimension? In other words, how does the Embedding layer know the length of each sequence in the training data set? How does it know I am training on sentences, not on individual words? Does it simply infer this during the training process?
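For reference, this is roughly the kind of model I mean (the vocabulary size and padding length below are placeholder values, not the tutorial's exact numbers):

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing import sequence

max_features = 20000   # assumed vocabulary size
maxlen = 80            # assumed length every review is padded/truncated to

# x_train would be a list of integer-encoded reviews; padding gives every
# sample the same length, so the input has shape (num_samples, maxlen).
# x_train = sequence.pad_sequences(x_train, maxlen=maxlen)

model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))  # Embedding, then LSTM
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))
```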

Zane
  • Keras doesn't know, you do. How does an LSTM know that what you are passing is a time series? You could train it on images and it wouldn't know, but you would... Not sure this is what you were asking for though – gionni Aug 03 '17 at 15:57

1 Answer


The Embedding layer is usually either the first or the second layer of your model. If it's the first (usually when you use the Sequential API), you need to specify its input shape, which is either (seq_len,) or (None,). If it's the second layer (usually when you use the Functional API), you need to specify the first layer, which is an Input layer, and for that layer you also need to specify a shape. When the shape is (None,), the sequence length is inferred from each batch of data fed to the model.
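A minimal sketch of both cases (the layer sizes and sequence length here are illustrative assumptions, not taken from your question):

```python
from keras.models import Sequential, Model
from keras.layers import Input, Embedding, LSTM, Dense

vocab_size, seq_len = 20000, 80  # assumed values for illustration

# Sequential API: the Embedding layer comes first, so it carries the
# input shape itself via input_length -> input shape (seq_len,).
seq_model = Sequential()
seq_model.add(Embedding(vocab_size, 128, input_length=seq_len))
seq_model.add(LSTM(64))
seq_model.add(Dense(1, activation='sigmoid'))

# Functional API: an explicit Input layer carries the shape instead.
# shape=(None,) leaves the sequence length unspecified, so it is
# inferred from each batch fed to the model.
inputs = Input(shape=(None,), dtype='int32')
x = Embedding(vocab_size, 128)(inputs)
x = LSTM(64)(x)
outputs = Dense(1, activation='sigmoid')(x)
func_model = Model(inputs, outputs)
```

Either way, the time dimension is simply the second axis of the (padded) integer input; the Embedding layer maps each integer along that axis to a vector, and the LSTM then iterates over that axis.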

Marcin Możejko