The Fashion-MNIST data included in `keras.datasets`
contains two sets of arrays (train and test) of shapes (60000, 28, 28) and (10000, 28, 28) respectively. I want to feed the train images, each of shape (28, 28), to a pre-trained Keras model that requires an input shape of (224, 224, 3). Instead of using `skimage.transform.resize`
, I used OpenCV to resize the arrays of shape (28, 28) to shape (224, 224). The new 224x224 images had the same color composition as the original 28x28 images (black image/object on a white background). As I'm new to OpenCV, I don't know whether it has a method to expand the (224, 224) ndarrays to (224, 224, 3), so I used NumPy for that, as below:
# x can be obtained by loading the fashion dataset included in Keras
x = np.stack((x, )*3, axis=-1)
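For reference, here is a minimal, self-contained sketch of the whole preprocessing step. It uses a synthetic 28x28 array as a stand-in for one Fashion-MNIST image, and a pure-NumPy nearest-neighbour upscale in place of `cv2.resize` so it runs without OpenCV or Keras installed; the channel expansion is the same `np.stack` call as above.

```python
import numpy as np

# Stand-in for one 28x28 Fashion-MNIST image (uint8, values 0-255):
# a white square on a black background.
x = np.zeros((28, 28), dtype=np.uint8)
x[10:18, 10:18] = 255

# Nearest-neighbour upscale to 224x224 (8x along each axis) in pure
# NumPy; in my actual code this step is done with cv2.resize.
x_big = np.repeat(np.repeat(x, 8, axis=0), 8, axis=1)
assert x_big.shape == (224, 224)

# Replicate the single channel three times to get shape (224, 224, 3).
x_rgb = np.stack((x_big,) * 3, axis=-1)
assert x_rgb.shape == (224, 224, 3)

# Each channel is an exact copy of the 2-D array; the pixel values
# themselves are not modified by np.stack.
assert (x_rgb[..., 0] == x_big).all()
```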
Curiously, the expanded-dimension images flipped the color composition of image/object and background (that is, white image/object on a black background), as shown in the example below (original image on the left). Do you happen to know why? Thank you!