
I am training MobileNet_v1_1.0_224 with TensorFlow, using the Python scripts from the TensorFlow-Slim image classification model library. My dataset has 4 classes, distributed as follows:

normal_faces: 42070
oncall_faces: 13563 (faces of people who are on a phone call, with the mobile phone visible in the image)
smoking_faces: 5949
yawning_faces: 1630

All images in the dataset are square and larger than 224x224.

I am using train_image_classifier.py to train the model with the following arguments:

python train_image_classifier.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_name=custom \
    --dataset_split_name=train \
    --dataset_dir=${DATASET_DIR} \
    --model_name=mobilenet_v1 \
    --batch_size=32 \
    --max_number_of_steps=25000

After training, eval_image_classifier.py reports an accuracy greater than 95% on the validation set, but when I export the frozen graph and use it for predictions, it performs very poorly.
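
For reference, my prediction code follows the usual TF 1.x frozen-graph pattern, roughly like the sketch below (the paths and tensor names here are placeholders, not my exact script):

import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PATH = 'frozen_mobilenet_v1.pb'                  # placeholder path to the frozen graph
INPUT_TENSOR = 'input:0'                               # placeholder input tensor name
OUTPUT_TENSOR = 'MobilenetV1/Predictions/Reshape_1:0'  # placeholder output tensor name

# Load the frozen GraphDef and import it into a fresh graph.
with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Resize to the 224x224 network input and run a single image through the graph.
    img = Image.open('test_face.jpg').convert('RGB').resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32), axis=0)
    probs = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: batch})
    print(np.argmax(probs))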

I have also tried this notebook, but it produced similar results.

Log: Training Log
Plots: Loss and Accuracy

What is the reason for this? How do I fix this issue?

I have seen similar issues on SO but nothing related to MobileNets specifically.

user90123

1 Answer


Did you use a validation set? If so, what was the validation accuracy? If you used a validation set, a good way to check whether you are doing predictions properly is to run model.predict on that same validation set.
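
Since you are predicting from a frozen graph rather than a Keras model, the equivalent check looks roughly like the sketch below. It assumes a TF 1.x .pb whose input tensor is input:0 and whose output tensor is MobilenetV1/Predictions/Reshape_1:0 (typical for slim's MobileNet export, but verify against your own export), a validation folder with one sub-folder per class, and an approximation of the Inception-style preprocessing slim applies to MobileNet (resize to 224x224, scale pixels to [-1, 1]). If the accuracy printed here is also high, the frozen graph itself is fine and the problem is in your downstream prediction code, most often the preprocessing.

import glob
import os
import numpy as np
import tensorflow as tf
from PIL import Image

FROZEN_GRAPH = 'frozen_mobilenet_v1.pb'   # path to your frozen graph (placeholder)
VAL_DIR = 'validation'                    # one sub-folder per class (placeholder)
# Class order must match the label map produced when the TFRecords were built (assumption).
CLASS_NAMES = ['normal_faces', 'oncall_faces', 'smoking_faces', 'yawning_faces']

# Load the frozen GraphDef and import it into a fresh graph.
with tf.gfile.GFile(FROZEN_GRAPH, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

def preprocess(path):
    # Approximate slim's eval-time preprocessing for MobileNet:
    # resize to 224x224 and scale pixel values from [0, 255] to [-1, 1].
    img = Image.open(path).convert('RGB').resize((224, 224))
    return np.asarray(img, dtype=np.float32) / 127.5 - 1.0

correct = total = 0
with tf.Session(graph=graph) as sess:
    for label, cls in enumerate(CLASS_NAMES):
        for path in glob.glob(os.path.join(VAL_DIR, cls, '*.jpg')):
            batch = np.expand_dims(preprocess(path), axis=0)
            probs = sess.run('MobilenetV1/Predictions/Reshape_1:0',
                             feed_dict={'input:0': batch})
            correct += int(np.argmax(probs) == label)
            total += 1

print('Frozen-graph accuracy on the validation images: %.4f' % (float(correct) / total))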

Gerry P
  • I have updated the question. The validation set is a 10% split of the dataset, and the accuracy I mentioned in the question is the validation accuracy. – user90123 Jul 18 '21 at 07:40