
Hi all.

I am using transfer learning to build a new model from my own samples. The framework is Keras 2.0+. I modified the code based on the example "Fine-tune InceptionV3 on a new set of classes" from https://keras.io/applications/

Nothing goes wrong in the training step. But when I test the model on the test set, every picture gives the same predicted class even though they come from different classes. Example:

>>> print(preds)
[[0.0000000e+00 4.5558951e-38 0.0000000e+00 0.0000000e+00 6.3798614e-36
  8.4623914e-22 1.0000000e+00 1.0636564e-11]]
>>> print(pred_classes)
6

I tested 10 pictures from 8 classes, and all of them were predicted as class 6.

Any suggestions?

Training code:

from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator

base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(8, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory('./TranningSet',
                                                    target_size=(224, 224),
                                                    color_mode='rgb',
                                                    batch_size=32,
                                                    class_mode='categorical',
                                                    shuffle=True)

step_size_train = train_generator.n // train_generator.batch_size
model.fit_generator(generator=train_generator,
                    steps_per_epoch=step_size_train,
                    epochs=100,
                    use_multiprocessing=True)

The final training accuracy is a bit low, but it is around 70%:

50/50 [==============================] - 297s 6s/step - loss: 4.2306 - acc: 0.7040
Epoch 99/100
50/50 [==============================] - 303s 6s/step - loss: 3.7681 - acc: 0.7387
Epoch 100/100
50/50 [==============================] - 293s 6s/step - loss: 3.7569 - acc: 0.7443
<keras.callbacks.History object at 0x7fd931756bd0>
>>>

Prediction code:

import keras
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.models import Model
import numpy as np
from keras.models import load_model
from keras.applications.inception_v3 import preprocess_input

model = load_model('/root/AIdetection/Keras/V6.5/20181217_V6.5.h5')


img = image.load_img('/root/AIdetection/Keras/V6.5/TestSet/Healthy50/20181026.06.JPG', target_size=(224, 224))
x = image.img_to_array(img)
x *= (255.0/x.max())
image = np.expand_dims(x, axis=0)
image = preprocess_input(image)
preds = model.predict(image)
pred_classes = np.argmax(preds)
print(preds)
print(pred_classes)
Zehan Dai

1 Answer


Is your training data balanced and are you shuffling it before training? In other words, is it possible that most of your training data is of class 6 and it is learning to simply predict 6 every time?
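If it helps, here is a rough sketch of how you could inspect the class distribution your generator sees, reusing the train_generator from the training script above (classes and class_indices are standard attributes of the iterator returned by flow_from_directory):

import numpy as np

# Count how many training images belong to each class index,
# then print the count next to the class (folder) name.
counts = np.bincount(train_generator.classes)
for class_name, class_index in train_generator.class_indices.items():
    print(class_name, counts[class_index])

If one class dominates the counts, the model can reach ~70% accuracy by mostly predicting that class.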

Also check that your test set is in the same format as your train set. Are you doing any type of image processing before passing the train data to your model?
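For example, your prediction code rescales with x *= (255.0/x.max()) before calling preprocess_input, while the training generator only applies preprocess_input to the raw pixel values. A minimal sketch of test-time preprocessing that mirrors the training pipeline (assuming model is the one you loaded with load_model, and using the same test image path from your prediction script):

import numpy as np
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input

# Mirror the training pipeline: img_to_array followed by preprocess_input only,
# with no extra manual rescaling in between.
img = image.load_img('/root/AIdetection/Keras/V6.5/TestSet/Healthy50/20181026.06.JPG',
                     target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print(np.argmax(preds))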

Allen