
I used a deep neural network (VGG16) for texture image classification. I had to train the whole network from scratch to obtain good accuracy, since the pretrained weights are tuned to recognize object images. After training, I obtained 90% validation accuracy. To the best of my knowledge, Keras computes the accuracy by checking whether the class with the highest value in the predicted probability vector is the correct class. I computed the accuracy on the test data the same way and, surprisingly, it was very low: 30%. I thought the test data might simply differ from the validation data, so I recomputed the accuracy on the validation data in the same way as Keras, and it was also around 30%. Note that after training the model, I saved its weights. I then created a new model, loaded the weights, and compiled it:

from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout
from keras import optimizers

# Rebuild the architecture used during training: VGG16 convolutional base
# without its top, followed by a small custom classifier
vgg16_model = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
model = Sequential()
model.add(vgg16_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(classesNb, activation='softmax'))

# Load the weights saved after training, then compile and save the model
model.load_weights(trainedModelsDir + modelName)
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
model.save(compiledModelsDir + modelName)
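
For reference, this is what I understand Keras's accuracy metric to do, written out as a minimal NumPy sketch (the function and variable names here are mine, not Keras code):

import numpy as np

def top1_accuracy(probs, labels):
    # probs: (n_samples, n_classes) softmax outputs
    # labels: (n_samples,) integer class indices
    # Categorical accuracy takes the argmax of each prediction and
    # compares it with the true class
    return np.mean(np.argmax(probs, axis=1) == labels)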

After that, I computed the accuracy on test/validation data:

import os
import numpy as np
from keras.models import load_model
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

classesProbabilities = []
model = load_model(compiledModelsDir + 'model1.h5')

# os.walk yields the root 'val/' directory first (it contains no image
# files), so classIdx starts at -1 and reaches 0 for the first class
classIdx = -1
crrctPred = 0

for subdir, dirs, files in sorted(os.walk(splittedDatasetDir + 'val/')):
    for imageName in sorted(files):
        imagePath = subdir + os.sep + imageName
        img = image.load_img(imagePath, target_size=(224, 224))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input(img)
        y = model.predict(img)[0]
        classesProbabilities.append(y)
        # Count the prediction as correct when the current class gets the
        # highest probability
        if y[classIdx] == np.amax(y):
            crrctPred += 1
    cpArray = np.array(classesProbabilities)
    classIdx += 1

# Overall top-1 accuracy; float() avoids integer division on Python 2
classAcc = float(crrctPred) / len(classesProbabilities)
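
One way to cross-check my loop would be to let Keras recompute the validation accuracy itself (a sketch, assuming the same directory layout as above and the loaded model from the previous snippet; the batch size is arbitrary):

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input

# Apply the same VGG16 preprocessing as in the manual loop
val_gen = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
    splittedDatasetDir + 'val/', target_size=(224, 224),
    batch_size=25, class_mode='categorical')
loss, acc = model.evaluate_generator(val_gen, steps=len(val_gen))
print('Keras-computed validation accuracy:', acc)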

I used sorted(...) so that the class directories are visited in the same order as in the class probability vector (I used flow_from_directory during training, which takes the classes from the directories in alphabetical order; the snippet below prints that mapping to double-check it). classIdx is initialized to -1 so that it reaches 0 for the first class directory, since os.walk yields the root directory first. Also note that the dataset I am using is very small (250 images for training, 125 for testing, and 125 for validation). I suspect the predictions made while training the model may vary slightly after loading the weights into a newly created model and predicting again. Is this the source of the error? I also noticed that the misclassified samples were assigned to classes similar to the true class, which is strange given how low the accuracy is.
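
To double-check the alphabetical-ordering assumption, the mapping that flow_from_directory builds can be printed directly (a sketch using the validation directory; the printed names depend on the dataset):

from keras.preprocessing.image import ImageDataGenerator

val_flow = ImageDataGenerator().flow_from_directory(splittedDatasetDir + 'val/',
                                                    target_size=(224, 224))
# class_indices maps each class subdirectory name to its index in the
# probability vector, e.g. {'classA': 0, 'classB': 1, ...} (names here
# are placeholders)
print(val_flow.class_indices)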

