
I am currently working on a college project that takes retina image data and classifies diabetic retinopathy. The dataset has two columns: one contains the ID number and the other contains the diagnosis value (the diabetic retinopathy stage the patient is in).

I tried augmentation, dropout layers, and L2 regularizers, but nothing is helping: validation accuracy stays at 77% (highest) while train accuracy goes above 90%. Any kind of help or advice is highly appreciated. Thank you in advance.

Edit: The train set has a total of 3663 images.
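For context, this is roughly how I inspect the two-column data for class balance (the file name and rows here are made up just so the snippet stands alone; the real CSV has 3663 rows):

```python
import pandas as pd

# stand-in for the real two-column CSV (id_code, diagnosis)
df = pd.DataFrame({
    'id_code': ['a', 'b', 'c', 'd'],
    'diagnosis': [0, 2, 0, 4],  # diabetic retinopathy stage, 0-4
})

# per-stage image counts, ordered by stage
counts = df['diagnosis'].value_counts().sort_index()
print(counts)
```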

The Imports:

import numpy as np
from keras.applications import EfficientNetV2B3, NASNetMobile
from keras.layers import GlobalAveragePooling2D, Dense, Concatenate
from keras.models import Model
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import to_categorical  # np_utils has been removed from recent Keras; import to_categorical directly

Adding the classification heads with dropout and L2 regularization:

from keras.layers import Dropout
from keras.regularizers import l2

effnet_base_model = EfficientNetV2B3(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
effnet_x = effnet_base_model.output
effnet_x = GlobalAveragePooling2D()(effnet_x)

# effnet_x = Dense(1024, activation='relu')(effnet_x)
effnet_x = Dense(1024, activation='relu', kernel_regularizer=l2(0.01))(effnet_x)
effnet_x = Dropout(0.5)(effnet_x)

effnet_predictions = Dense(5, activation='softmax')(effnet_x)
effnet_model = Model(inputs=effnet_base_model.input, outputs=effnet_predictions)

nasnet_base_model = NASNetMobile(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
nasnet_x = nasnet_base_model.output
nasnet_x = GlobalAveragePooling2D()(nasnet_x)

# nasnet_x = Dense(1024, activation='relu')(nasnet_x)
nasnet_x = Dense(1024, activation='relu', kernel_regularizer=l2(0.01))(nasnet_x)
nasnet_x = Dropout(0.5)(nasnet_x)

nasnet_predictions = Dense(5, activation='softmax')(nasnet_x)
nasnet_model = Model(inputs=nasnet_base_model.input, outputs=nasnet_predictions)
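One variant I have been experimenting with is freezing the pretrained backbone so that only the new head trains, which drastically cuts the number of trainable parameters. A minimal sketch (using `weights=None` here only so the snippet stands alone; the real run loads `'imagenet'`):

```python
from keras.applications import EfficientNetV2B3
from keras.layers import GlobalAveragePooling2D, Dense, Dropout
from keras.models import Model

# weights=None keeps the sketch self-contained; the real model loads 'imagenet'
base = EfficientNetV2B3(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the entire pretrained backbone

x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
out = Dense(5, activation='softmax')(x)
frozen_head_model = Model(inputs=base.input, outputs=out)

# only the two Dense layers (kernel + bias each) remain trainable
```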

For Data Augmentation:

from keras.preprocessing.image import ImageDataGenerator

# Data augmentation
datagen = ImageDataGenerator(
    rotation_range=30,           
    width_shift_range=0.3,       
    height_shift_range=0.3,      
    shear_range=0.3,             
    zoom_range=0.3,              
    horizontal_flip=True,
    vertical_flip=True,          
    brightness_range=(0.7, 1.3), 
    channel_shift_range=20,      
    fill_mode='reflect'          
)

datagen.fit(x_train)

from keras.layers import Average

# Blending the models by averaging their softmax outputs
averaged_predictions = Average()([effnet_model.output, nasnet_model.output])
blended_model = Model(inputs=[effnet_model.input, nasnet_model.input], outputs=averaged_predictions)

# Compile the blended model ('lr' is deprecated in recent Keras; use 'learning_rate')
blended_model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
# Train the blended model.
# Note: when ImageDataGenerator.flow is given a list, only the first array is
# augmented; the second copy is passed through unchanged.
history = blended_model.fit(
    datagen.flow([x_train, x_train], y_train, batch_size=2),
    steps_per_epoch=len(x_train) // 2,  # integer steps; adjust to your batch size
    validation_data=([x_test, x_test], y_test),
    epochs=15
)
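Given how the validation loss oscillates in the output below, I am also considering early stopping and learning-rate reduction. A sketch of the callback setup (the parameter values are just a starting guess, not tuned):

```python
from keras.callbacks import EarlyStopping, ReduceLROnPlateau

# stop once val_loss has not improved for 5 epochs, restoring the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# halve the learning rate after 2 stagnant epochs, down to a floor of 1e-6
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, min_lr=1e-6)

# these would be passed to fit() via callbacks=[early_stop, reduce_lr]
```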

The output I am currently getting:

Epoch 1/15
1647/1647 [==============================] - 332s 175ms/step - loss: 7.1797 - accuracy: 0.6895 - val_loss: 1.8484 - val_accuracy: 0.6757
Epoch 2/15
1647/1647 [==============================] - 278s 169ms/step - loss: 1.2030 - accuracy: 0.7408 - val_loss: 0.9885 - val_accuracy: 0.7439
Epoch 3/15
1647/1647 [==============================] - 280s 170ms/step - loss: 0.9069 - accuracy: 0.7554 - val_loss: 0.8860 - val_accuracy: 0.7575
Epoch 4/15
1647/1647 [==============================] - 280s 170ms/step - loss: 0.8398 - accuracy: 0.7797 - val_loss: 0.9231 - val_accuracy: 0.7302
Epoch 5/15
1647/1647 [==============================] - 281s 170ms/step - loss: 0.7956 - accuracy: 0.8033 - val_loss: 0.8378 - val_accuracy: 0.7766
Epoch 6/15
1647/1647 [==============================] - 278s 169ms/step - loss: 0.7555 - accuracy: 0.8212 - val_loss: 0.9398 - val_accuracy: 0.7139
Epoch 7/15
1647/1647 [==============================] - 280s 170ms/step - loss: 0.7155 - accuracy: 0.8370 - val_loss: 0.8971 - val_accuracy: 0.7221
Epoch 8/15
1647/1647 [==============================] - 278s 169ms/step - loss: 0.6638 - accuracy: 0.8513 - val_loss: 1.0816 - val_accuracy: 0.6676
Epoch 9/15
1647/1647 [==============================] - 278s 169ms/step - loss: 0.6362 - accuracy: 0.8659 - val_loss: 0.9938 - val_accuracy: 0.6948
Epoch 10/15
1647/1647 [==============================] - 280s 170ms/step - loss: 0.5895 - accuracy: 0.8813 - val_loss: 0.9505 - val_accuracy: 0.7003
Epoch 11/15
1647/1647 [==============================] - 277s 168ms/step - loss: 0.5531 - accuracy: 0.8962 - val_loss: 0.9685 - val_accuracy: 0.7711
Epoch 12/15
1647/1647 [==============================] - 279s 169ms/step - loss: 0.5443 - accuracy: 0.9108 - val_loss: 1.4665 - val_accuracy: 0.5640
Epoch 13/15
1647/1647 [==============================] - 290s 176ms/step - loss: 0.5180 - accuracy: 0.9247 - val_loss: 1.4458 - val_accuracy: 0.5559
Epoch 14/15
1647/1647 [==============================] - 285s 173ms/step - loss: 0.5145 - accuracy: 0.9253 - val_loss: 0.8927 - val_accuracy: 0.7847
Epoch 15/15
1647/1647 [==============================] - 288s 175ms/step - loss: 0.5029 - accuracy: 0.9293 - val_loss: 1.1441 - val_accuracy: 0.6158
