
I want to train my model on the whole training set at once, i.e. without using mini-batches. However, when I fit the model, the console output shows 25/25, which (to my knowledge) indicates that 25 batches are being used. X_train has shape (3967, 7), y_train (3967, 3), X_test (793, 7), and y_test (793, 3).

model.fit(X_train,
          y_train,
          epochs=1000,
          batch_size=X_train.shape[0],
          validation_data=(X_test, y_test),
          callbacks=[callback],
          validation_steps=1,
          steps_per_epoch=1,
          verbose=0,
)

25/25 [==============================] - 0s 680us/step - loss: 0.0021 - mean_squared_error: 1.8719e-04 - root_mean_squared_error: 0.0137 - mean_absolute_error: 0.0077

I also tried fitting the model without the validation_steps=1 and steps_per_epoch=1 arguments (leaving them at their defaults), which did not solve the issue.
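For reference, Keras computes the number of steps per epoch as ceil(samples / batch_size). A quick sanity check (assuming Keras's default batch size of 32 is being applied to the validation set, since `validation_data` does not inherit `batch_size` unless `validation_batch_size` is set in newer versions) shows one plausible source of a 25-step count:

```python
import math

# Keras derives steps per epoch as ceil(samples / batch_size).
train_samples, val_samples = 3967, 793
full_batch = train_samples   # batch_size=X_train.shape[0]
default_batch = 32           # Keras's default batch size (assumption: applied to validation)

train_steps = math.ceil(train_samples / full_batch)  # 1 step: the whole training set
val_steps = math.ceil(val_samples / default_batch)   # 25 steps over 793 validation samples

print(train_steps, val_steps)  # 1 25
```

So a progress bar reading 25/25 would be consistent with the validation pass running in batches of 32, not with the training pass being split.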

I am using TensorFlow 2.3.0.

Michael
  • I don't think the terminal is showing an incorrect value, but maybe the batch size you are using is not correct, for example if x_train and y_train are tf datasets, then they have their own batch sizes. – Dr. Snoopy Nov 15 '21 at 21:54
  • Thanks, but x_train and y_train are numpy arrays. I just use their length as batch size to pass the full training set – Michael Nov 15 '21 at 21:57
  • Validation steps and steps per epoch are only used with generators or tf datasets. Does the number of batches change with the batch size? – Dr. Snoopy Nov 15 '21 at 21:58
  • No, it does not change, regardless of whether I use validation_steps and steps_per_epoch – Michael Nov 15 '21 at 22:01
  • I am talking about batch size, forget about steps completely. Maybe put a reproducible example in your question. – Dr. Snoopy Nov 15 '21 at 22:04

0 Answers