
I have done transfer learning on a dataset of 47 images of class 0 and 57 images of class 1, using the Xception model as follows:

    # (imports assumed here; adjust if using standalone keras instead of tf.keras)
    from tensorflow.keras.applications import Xception
    from tensorflow.keras.layers import AveragePooling2D, Flatten, Dense, Dropout
    from tensorflow.keras.models import Model
    from tensorflow.keras import optimizers

    base_model = Xception(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
    headModel = base_model.output
    headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
    headModel = Flatten(name="flatten")(headModel)
    headModel = Dense(128, activation="relu")(headModel)
    headModel = Dropout(0.5)(headModel)
    headModel = Dense(2, activation="softmax")(headModel)
    model = Model(inputs=base_model.input, outputs=headModel)
    for layer in base_model.layers:
        layer.trainable = False

    opt = optimizers.Adam(lr=1e-4)
    model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])

Now I have a dataset of 104 images, which I pass in as follows:

    # (imports assumed here; adjust if using standalone keras instead of tf.keras)
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    import numpy as np

    train_datagen = ImageDataGenerator(rescale=1./255,
        rotation_range=30,
        zoom_range=0.15,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.15,
        horizontal_flip=True,
        fill_mode="nearest")
    train_generator = train_datagen.flow_from_directory('/content/CROPPED_train',
        target_size=(299, 299),
        batch_size=16,
        seed=np.random.seed())

Then after 50 epochs I got an accuracy of 97-98%:

    history = model.fit_generator(
        train_generator,
        epochs=5,
        steps_per_epoch=total_train // 16)

But when I run prediction on the training set, it predicts only class 0 for images of both class 0 and class 1:

    import cv2
    import matplotlib.pyplot as plt
    import numpy as np

    img = cv2.imread("13.png")
    img = cv2.resize(img, (299, 299))
    plt.imshow(img)
    plt.show()
    preds = model.predict(np.expand_dims(img, axis=0))[0]
    #y = model.predict(img[np.newaxis, ...])
    i = np.argmax(preds)
    print(i)

If the model were underfitting, it should not report good accuracy during training, yet it does; and if it were overfitting, it should at least predict the training dataset correctly. So please tell me what the problem is. When predicting on the training dataset I get about 53% accuracy, with only one or two images of class 1 labelled correctly.
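As a sanity check, here is a minimal sketch (not part of the original code) that predicts on a batch drawn from `train_generator` itself, so the images go through exactly the same `rescale=1./255` preprocessing used during training; if these predictions look reasonable, the model has learned and the issue is in the manual prediction code:

    # Sketch: predict on one batch from the training generator (already rescaled by 1./255)
    x_batch, y_batch = next(train_generator)
    preds = model.predict(x_batch)
    print(np.argmax(preds, axis=1))    # predicted class per image in the batch
    print(np.argmax(y_batch, axis=1))  # true class per image in the batch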

But when I did the same transfer learning with VGG16, it predicts the training set perfectly.

  • Use the same preprocessing steps in the test phase, i.e. divide image pixel values by 255. – today Apr 21 '20 at 18:46
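Following up on that comment, here is a minimal sketch (not part of the original code) of prediction with the same preprocessing as training: rescaling by 1/255, plus converting OpenCV's BGR output to RGB, since `flow_from_directory` feeds RGB images while `cv2.imread` returns BGR. It assumes the `model` and `13.png` from the question:

    # Sketch: apply the same preprocessing at prediction time as during training
    import cv2
    import numpy as np

    img = cv2.imread("13.png")                   # OpenCV loads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # the generator feeds RGB, so convert
    img = cv2.resize(img, (299, 299))
    img = img.astype("float32") / 255.0          # same rescale=1./255 as the generator
    preds = model.predict(np.expand_dims(img, axis=0))[0]
    print(np.argmax(preds))                      # predicted class index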
