I am trying to train an autoencoder using TensorFlow and Keras. My training set consists of more than 200K unlabeled 512x128 RGB images. If I load the whole dataset into a single array, its shape would be (200000, 512, 128, 3), which works out to roughly 40 GB of RAM as uint8 and over 150 GB as float32, far more than I have available. I know I can reduce the batch size during training, but that only limits memory usage on the GPU/CPU during each step; it does not help with holding the entire dataset in RAM.
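For concreteness, here is a minimal sketch of what I am doing now (the directory path, file extension, and use of Pillow are just placeholders for illustration):

```python
import numpy as np
from pathlib import Path
from PIL import Image

image_dir = Path("data/train")  # placeholder location of the ~200K images

# Load every image into one big float32 array of shape (N, 512, 128, 3).
# This is the step that exhausts host RAM long before training starts.
images = np.stack(
    [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) / 255.0
     for p in sorted(image_dir.glob("*.png"))]
)
print(images.shape)  # (200000, 512, 128, 3)
```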
Is there a workaround for this problem?