I was running into an issue where my model converges very fast, after only about 20 or 30 epochs. My dataset contains 7,000 samples, and my neural network has 3 hidden layers, each with 18 neurons, batch normalization, and dropout of 0.2.
My task is multi-label classification, where my labels are [0 0 1], [0 1 0], [1 0 0], and [0 0 0].
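For reference, here is a minimal sketch of that label encoding, assuming the labels are stored as NumPy arrays (`Y_example` is just an illustrative name, not from my actual code):

```python
import numpy as np

# Each row is a multi-hot vector over the 3 classes;
# the all-zeros row means "none of the classes applies".
Y_example = np.array([
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
])

print(Y_example.shape)  # (4, 3): 4 samples, 3 independent binary labels
```

Because the all-zeros label is possible, the classes are not mutually exclusive, which is why the output layer below uses a sigmoid per class with binary cross-entropy rather than a softmax.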
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization

num_neuron = 18

model = Sequential()
model.add(Dense(num_neuron, input_shape=(input_size,), activation='elu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(num_neuron, activation='elu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(num_neuron // 3, activation='elu'))  # integer division: Dense needs an int, not 6.0
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(3, activation='sigmoid'))  # one sigmoid per label

model.compile(loss='binary_crossentropy',
              optimizer='nadam',
              metrics=['accuracy'])

history = model.fit(X_train, Y_train, batch_size=512, epochs=1000,
                    validation_data=(X_test, Y_test), verbose=2)
I was wondering if there is anything I can do to improve it further, because even after I let it run for 1000 epochs, nothing really changes.