
I was going through the Keras documentation alongside previous questions and answers here on Stack Overflow. This is what I have so far:

#Imports
from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import SGD
import numpy as np
import glob

#Creating base pre-trained model (default input size for InceptionV3 is (299, 299))
base_model = InceptionV3(weights = 'imagenet', include_top = False, input_shape = (299, 299, 3))


#Adding a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

#Adding a fully-connected dense layer
x = Dense(1024, activation = 'relu')(x)

#Adding a logistic layer - We have 2 classes: Cats and Dogs
predictions = Dense(2, activation = 'softmax')(x)

#Model to be trained
model = Model(inputs = base_model.input, outputs = predictions)

#First train only the top layers
#Freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

#Compile the model
model.compile(optimizer = 'rmsprop', loss = 'sparse_categorical_crossentropy')

#Train the model on the new data
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory("./data/train",
                                                    target_size = (299, 299),
                                                    batch_size = 25,
                                                    class_mode = 'binary')
model.fit_generator(train_generator, steps_per_epoch = 10)

#Train top 2 inception blocks by freezing first 249 layers and unfreezing the rest
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

#Recompile the model for these modifications to take effect; use SGD with a low learning rate
model.compile(optimizer = SGD(lr = 0.0001, momentum = 0.9), loss = 'sparse_categorical_crossentropy')

#Train model again, fine-tuning top 2 inception blocks alongside top Dense layers
model.fit_generator(train_generator, steps_per_epoch = 10)

#Test model
images = glob.glob('./test/*.jpg')
for img_path in images:
    img = image.load_img(img_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)

    pred = model.predict(x)

    print('Predicted:', decode_predictions(pred, top=3)[0])

I'm currently getting this error (line 82 in decode_predictions): `ValueError: 'decode_predictions' expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 2)`
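From what I can tell, `decode_predictions` is hard-coded for the 1000 ImageNet classes, so it can't decode the `(1, 2)` output of a custom two-class head. A minimal sketch of decoding the prediction manually instead, assuming the label order matches `flow_from_directory` (which assigns class indices by sorting the class folder names alphabetically, so hypothetical folders `cats/` and `dogs/` would map to 0 and 1):

```python
import numpy as np

# Hypothetical label order; in practice, read it from
# train_generator.class_indices to be safe.
class_labels = ['cats', 'dogs']

# Stand-in for model.predict(x), which returns shape (1, 2)
pred = np.array([[0.3, 0.7]])

# Pick the most likely class per sample instead of calling decode_predictions
predicted_class = class_labels[int(np.argmax(pred, axis=1)[0])]
print('Predicted:', predicted_class)  # → Predicted: dogs
```

This replaces the `decode_predictions` call entirely; that helper is only meant for the stock ImageNet models with `include_top=True`.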

Is there something I forgot to do? I'm currently at my wit's end. Any and all help would be appreciated. Thank you in advance.

CodingPoding
  • I've spent a good 3-4 days on this and I haven't been able to properly get it to work. Any and all help would be greatly appreciated – CodingPoding Dec 10 '17 at 22:49
  • Where is `image` defined `img = image.load_img(img_path, target_size=(299, 299))` – Charles Green Dec 13 '17 at 15:09
  • My image is defined in my test folder. I get the feeling my only issue is with decode predictions and that everything should work so as long as I remove that – CodingPoding Dec 14 '17 at 19:57

0 Answers