
The story

I have a dataset of ECG signal recordings shaped (162 patients, 65635 samples). I computed the continuous wavelet transform of these recordings, so the result has shape (162 patients, 65635 samples, 80 coefficients), which is too large to fit in memory (about 40 MB per instance), so I saved each instance as a .npz matrix and use Keras generators during training. The model uses LSTM and convolution layers and is trained on a CPU, and training is very slow.
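For reference, here is a minimal sketch of the kind of generator setup described above, assuming each .npz file holds one patient's (65635, 80) coefficient matrix under the key "cwt" (the file layout and key name are assumptions):

```python
import numpy as np
import tensorflow as tf

class CWTGenerator(tf.keras.utils.Sequence):
    """Streams per-patient CWT matrices from .npz files instead of
    loading all 162 instances into memory at once."""

    def __init__(self, npz_paths, labels, batch_size=4):
        self.npz_paths = npz_paths    # one .npz file per patient
        self.labels = labels          # labels aligned with npz_paths
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.npz_paths) / self.batch_size))

    def __getitem__(self, idx):
        paths = self.npz_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        # assumes each file was saved with np.savez(path, cwt=coeffs),
        # where coeffs has shape (65635, 80)
        batch_x = np.stack([np.load(p)["cwt"] for p in paths]).astype("float32")
        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        return batch_x, batch_y
```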

Questions

  1. What are the best strategies to deal with this problem?

  2. How can I decrease the size of the coefficient matrix resulting from the CWT?

Mohammed Khalid
  • Instead of loading the entire dataset into memory, how about streaming portions of the data on the go using something like an [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator)? Also, note that using a CPU to train deep neural networks takes a lot of time. If you want to prioritize speed, use cloud platforms such as AWS, which make use of GPU computing power. – Jake Tae Feb 09 '20 at 23:28
  • I use a custom Keras generator to load the data as patches, and it turned out that the long sequence (65635 timesteps through the LSTM) is the main cause of this slowdown. – Mohammed Khalid Feb 10 '20 at 09:31
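Following up on the comment above about the 65635-step sequence being the bottleneck: one common way to shrink both the LSTM sequence length and the coefficient matrix (question 2) is to decimate the time axis and split the result into fixed-length windows before feeding the network. A minimal sketch, assuming a (65635, 80) coefficient matrix; the decimation factor and window length here are arbitrary choices, not values from the question:

```python
import numpy as np

def shorten_cwt(coeffs, decimate=16, window=512):
    """Reduce a (65635, 80) CWT matrix to shorter LSTM inputs.

    1. Decimate the time axis (keep every `decimate`-th column), since
       neighbouring CWT columns are highly redundant.
    2. Split the decimated signal into fixed-length windows so the LSTM
       sees `window`-step sequences instead of tens of thousands of steps.
    """
    decimated = coeffs[::decimate]                    # roughly (4100, 80)
    n_windows = len(decimated) // window
    trimmed = decimated[:n_windows * window]
    return trimmed.reshape(n_windows, window, coeffs.shape[1])

# example: (65635, 80) -> (8, 512, 80); each window becomes one LSTM sample
windows = shorten_cwt(np.random.rand(65635, 80).astype("float32"))
print(windows.shape)
```

Alternatively, the coefficient matrix can be made smaller at the source by computing the CWT with fewer scales or on an already-downsampled ECG signal; which option is appropriate depends on how much temporal and frequency resolution the classification task actually needs.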

0 Answers