# Reshaping the image arrays (adding a channel dimension) so they are suitable for convolution
import numpy as np
img_size = 28
X_trainr = np.array(X_train).reshape(-1, img_size, img_size, 1)
X_testr = np.array(X_test).reshape(-1, img_size, img_size, 1)
# Model Compilation
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_trainr, y_train, epochs=5, validation_split=0.2)  # training the model
I loaded the MNIST dataset for a digit-recognition model and split it into training and test sets. I then added a new (channel) dimension to the 3D training array and named the resulting array X_trainr. After that I compiled and fitted the model. But after fitting, the model is not training on the whole training set (42000 samples); instead it appears to take only 1500 samples. I have tried setting validation_split = 0.3, and then it trained on 1313 samples. Why is my model not taking the whole training set (42000 samples)?
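For reference, the reshape step described above can be sketched like this. This is a minimal reconstruction using a zero-filled stand-in array instead of the real MNIST data (the model definition and data loading are not shown in the snippet):

```python
import numpy as np

# Stand-in for the MNIST training images; the real array from the
# dataset loader has the same 3D shape: (num_samples, 28, 28)
X_train = np.zeros((60000, 28, 28), dtype=np.uint8)

img_size = 28
# reshape(-1, 28, 28, 1) appends a channel dimension; the -1 tells
# NumPy to infer the sample count, so it stays unchanged
X_trainr = np.array(X_train).reshape(-1, img_size, img_size, 1)
print(X_trainr.shape)  # (60000, 28, 28, 1)
```

The reshape only changes the array's dimensionality, not the number of samples, so the sample count going into `model.fit` is the same before and after.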
Output
Epoch 1/5
1500/1500 [==============================] - 102s 63ms/step - loss: 0.2930 - accuracy: 0.9063 - val_loss: 0.1152 - val_accuracy: 0.9649
Epoch 2/5
1500/1500 [==============================] - 84s 56ms/step - loss: 0.0922 - accuracy: 0.9723 - val_loss: 0.0696 - val_accuracy: 0.9780
Epoch 3/5
1500/1500 [==============================] - 80s 53ms/step - loss: 0.0666 - accuracy: 0.9795 - val_loss: 0.0619 - val_accuracy: 0.9818
Epoch 4/5
1500/1500 [==============================] - 79s 52ms/step - loss: 0.0519 - accuracy: 0.9837 - val_loss: 0.0623 - val_accuracy: 0.9831
Epoch 5/5
1500/1500 [==============================] - 84s 56ms/step - loss: 0.0412 - accuracy: 0.9870 - val_loss: 0.0602 - val_accuracy: 0.9818