
I am working on image classification on a remote workstation with about 65GB of memory. I am training on about 100 3D medical images using transfer learning. However, memory usage in the Spyder IDE shows a gradual increase until the code breaks down with a "read rectangular" error. How can I solve this problem?

The code is as follows:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16

batch_size = 32
SIZE = 215  # spatial size fed to the network

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        '/home/idu/Desktop/COV19D/train/', # 100 CT scans (70-100 slices each)
        target_size=(SIZE, SIZE),  # must match the model's input_shape below
        batch_size=batch_size,
        classes=['covid', 'non-covid'],
        color_mode='grayscale',
        class_mode='binary')

# Randomly initialised VGG16 backbone (weights=None) accepting 1-channel input
VGG_model = VGG16(include_top=False, weights=None, input_shape=(SIZE, SIZE, 1))

# Freeze all layers; the backbone is used as a fixed feature extractor
for layer in VGG_model.layers:
    layer.trainable = False

# predict() over the whole generator materialises every feature map in RAM at once
feature_extractor = VGG_model.predict(train_generator)
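One way to bound the footprint of the final predict() call is to extract features batch by batch and stream them into a disk-backed array instead of accumulating everything in RAM. This is a minimal sketch, not the asker's code: extract_fn is a stand-in for something like VGG_model.predict_on_batch, and the random batches stand in for the real generator (both are assumptions; swap in the actual model and iterator).

```python
import numpy as np
import tempfile, os

def extract_features_to_memmap(batches, extract_fn, n_total, feat_shape, path):
    """Run extract_fn over batches, streaming results into a .npy memmap on disk."""
    out = np.lib.format.open_memmap(path, mode='w+', dtype=np.float32,
                                    shape=(n_total,) + feat_shape)
    i = 0
    for batch in batches:
        feats = extract_fn(batch)        # e.g. VGG_model.predict_on_batch(batch)
        out[i:i + len(feats)] = feats
        i += len(feats)
    out.flush()                          # features live on disk, not in RAM
    return out

# Demo with a fake extractor: global average pool over the spatial dimensions
def fake_extract(batch):
    return batch.mean(axis=(1, 2))       # (n, H, W, C) -> (n, C)

rng = np.random.default_rng(0)
batches = [rng.random((4, 8, 8, 3), dtype=np.float32) for _ in range(3)]
path = os.path.join(tempfile.mkdtemp(), 'features.npy')
feats = extract_features_to_memmap(batches, fake_extract, 12, (3,), path)
print(feats.shape)                       # (12, 3)
```

With the real model, only one batch of feature maps is held in memory at a time, and the resulting .npy file can later be opened with np.load(path, mmap_mode='r') for training the downstream classifier.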
asked by Kenan Morani
    If you have 100 images of roughly 100 slices each, a total of 65GB of RAM means each slice must be smaller than 6.5MB in the optimal case. How large is a slice on average? – MisterMiyagi Aug 28 '21 at 10:20
    Thank you for your comment. The slice is on average 128KB or so. It is a 512x512 grayscale image in JPEG format. – Kenan Morani Aug 28 '21 at 10:24
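The arithmetic in the comments can be made concrete. The 128KB JPEG size on disk is not what matters for RAM: after decoding and the 1./255 rescale, each slice is a float32 array, which is much larger. A rough budget, assuming 512x512 grayscale slices and ~85 slices per scan (the midpoint of the stated 70-100 range):

```python
# Rough RAM budget for the decoded slices (assumption: float32 after rescaling).
bytes_per_slice = 512 * 512 * 4      # 512x512 grayscale, 4 bytes per float32
slices = 100 * 85                    # ~100 scans at ~85 slices each (assumed midpoint)

total_gib = bytes_per_slice * slices / 1024**3
print(f"per slice: {bytes_per_slice / 1024**2:.1f} MiB")  # 1.0 MiB
print(f"all slices: {total_gib:.1f} GiB")                 # ~8.3 GiB
```

So the raw inputs alone are roughly 8GB, well under 65GB, which suggests the growth comes from everything being accumulated at once (decoded batches plus the full feature_extractor array returned by predict) rather than from any single slice being too large.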

0 Answers