I have defined a TensorFlow CNN as follows:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
model = models.Sequential()
model.add(layers.Conv2D(1, (9, 9), activation='relu', input_shape=(153, 204, 1)))
model.add(layers.MaxPooling2D((3, 3)))
model.add(layers.Conv2D(2, (9, 9), activation='tanh'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(2, (9, 9), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='softmax'))
model.summary()
which I compile and train with the following commands:
model.compile(optimizer='adam',
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=['accuracy'])

history = model.fit(image_list, behaviour, epochs=5,
                    validation_data=(image_list, behaviour), verbose=1)
(This was the initial commit, so I didn't want to do a train-test split yet; one block at a time.)
image_list has dimensions (1809, 153, 204, 1), i.e. 1809 images of 153x204x1 pixels each. behaviour can take any of the values 0, 1, 2.
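For reference, this is roughly how I check the shapes and the label distribution (a quick sketch, assuming image_list and behaviour are NumPy arrays; the variable names are the ones from my script above):

import numpy as np

# confirm the input shapes quoted above
print(image_list.shape)   # expect (1809, 153, 204, 1)
print(behaviour.shape)    # expect (1809,)

# how often each label (0, 1, 2) occurs, as a fraction of the dataset
labels, counts = np.unique(behaviour, return_counts=True)
print(dict(zip(labels, counts / counts.sum())))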
However, I noticed something weird: during training I get this output
57/57 [==============================] - 19s 325ms/step - loss: 0.0000e+00 - accuracy: 0.2537 - val_loss: 0.0000e+00 - val_accuracy: 0.2830
Why does it say 57/57? Doesn't this imply that only 57 images are taken into account? Very predictably, loss = 0, but accuracy is about 30%, which roughly corresponds to the proportion of the first label in the dataset (a label that all of the first 57 images share).
How can I convince it to take more images into account? PS: I know about shuffling; I just want all of the images in the training set to be used. Thank you all for your time.
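PPS: in case it clarifies what I mean by "taking more into account", this is the kind of call I have in mind, as a rough sketch assuming the batch_size argument of model.fit is what controls how many samples go into each step (I have not confirmed that this is the right knob):

history = model.fit(image_list, behaviour,
                    epochs=5,
                    batch_size=64,  # assumption: controls how many samples per step
                    validation_data=(image_list, behaviour),
                    verbose=1)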