
I'm learning how to work with Keras (TensorFlow backend) for image recognition, so I'm still not sure what I'm doing wrong here.

I'm trying to stack two models: VGG16 as the base, and a small one I made myself just to learn how stacking works. I want to classify an image into one of 5 classes.

The problem is in the last part, when I run fit_generator. Instead of yielding a valid tuple, the generator yields what looks like a bare list (just the image arrays). I've seen a lot of people with similar problems, but in their cases the output was None, so I'm not sure the same solution applies.

Parameters

nb_train_samples = 576
nb_validation_samples = 144
epochs = 30
batch_size = 12
img_width, img_height = 150, 150

Generators

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=50,
    width_shift_range=0.3,
    height_shift_range=0.3,
    shear_range=0.4,
    zoom_range=0.4,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)

My model
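
For reference, `model` below is the VGG16 base. The code that loads it isn't shown in this post; a minimal sketch of what it is assumed to look like (with include_top=False so its convolutional output can feed the custom classifier):

# Assumed, not shown in the post: the VGG16 base referred to as `model`,
# pretrained on ImageNet, without its fully connected top.
from keras import applications
model = applications.VGG16(include_top=False,
                           weights='imagenet',
                           input_shape=(img_width, img_height, 3))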

input = Input(batch_shape=model.output_shape)
x = Flatten()(input)
x = Dense(256, activation='relu', name="new_block_1")(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu', name="new_block_2")(x)
x = Dropout(0.5)(x)
x = Dense(5, activation='softmax', name="new_block_3")(x)
top_model = Model(input,x)

input = Input(shape=(img_width, img_height, 3))
x = model(input)
x = top_model(x)
final_model = Model(input, x)

final_model.compile(optimizer='rmsprop',
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])

Fit and Error

final_model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

ValueError: output of generator should be a tuple `(x, y, sample_weight)` or `(x, y)`. Found: [[[[ 0.89411771  0.89019614  0.87450987]
   [ 0.89411771  0.89019614  0.87450987]
   [ 0.89411771  0.89019614  0.87450987]
   ..., 

Update 1: as per @petezurich's tip, changed the activation function from 'sigmoid' to 'softmax'

htcoelho
  • This might not directly refer to your question, but is there a specific reason why you have a sigmoid on your last dense layer rather than a softmax? – petezurich Jun 19 '17 at 19:19
  • No reason, actually, but I might as well try changing it. I was just trying things out, so I might have copy-pasted it without noticing. – htcoelho Jun 19 '17 at 19:49
  • Softmax is the right choice in your case, since it gives you the probability of each of your 5 classes. Sigmoid would be correct for binary classification. – petezurich Jun 19 '17 at 19:51
  • I see, thank you for the tip :) Anyway, I changed it to softmax, but sadly the problem persists. – htcoelho Jun 19 '17 at 19:53
  • How do you get your labels? The easy way would be to put your images in subfolders for each class and set the `class_mode` in your generators to `True`. The generator then nicely derives the class labels from that. – petezurich Jun 19 '17 at 20:06
  • There was no `True`, but there was a 'categorical' value, which actually worked. Feel free to submit that as an answer and I'll accept it. Thank you very much :) – htcoelho Jun 19 '17 at 20:13

1 Answer


Your model is missing labels for training.

Simply set `class_mode` to `'categorical'` in your generators and put your images in subfolders, one per class. The generator then derives the class labels from the folder names.
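
A minimal sketch of the change, reusing the names from your question and assuming the images sit in one subfolder per class:

train_generator = train_datagen.flow_from_directory(
    train_data_dir,            # e.g. train_data_dir/class_a/, train_data_dir/class_b/, ...
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')  # yields (images, one-hot labels) tuples

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

With class_mode='categorical' each batch is an (x, y) tuple with one-hot encoded labels, which matches your categorical_crossentropy loss, so fit_generator no longer complains about the generator output.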

petezurich