
I learned from examples on the internet that when processing a time series with an RNN or LSTM, the series should be divided into overlapping time windows, like this:

[1,2,3,4,5,6] => [[1,2,3], [2,3,4], [3,4,5], [4,5,6]]
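
For reference, here is a minimal sketch of how such overlapping windows can be produced (window size 3, plain Python; the helper name sliding_windows is my own, not from any particular tutorial):

    # Hypothetical helper, for illustration only.
    def sliding_windows(series, window_size):
        # Slide a window of the given size over the series, stepping one element at a time.
        return [series[i:i + window_size]
                for i in range(len(series) - window_size + 1)]

    print(sliding_windows([1, 2, 3, 4, 5, 6], 3))
    # [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]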

This was quite a surprise to me, since I thought sequence learning was built into recurrent networks.

  1. Does it mean the topology above can only learn sequences of 3 elements?
  2. Does it mean I can feed the time windows in random order?
  3. If not, why bother splitting the sequence into time windows instead of simply feeding the net element by element?