
I have images with shape (3600, 3600, 3). I'd like to use an autoencoder on them. My code is:

from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator


input_img = Input(shape=(3600, 3600, 3))  

x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)



x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')




batch_size=2


datagen = ImageDataGenerator(rescale=1. / 255)

# dimensions of our images.
img_width, img_height = 3600, 3600

train_data_dir = 'train'
validation_data_dir = 'validation'




generator_train = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        )



generator_valid = datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)



autoencoder.fit_generator(generator=generator_train,
            validation_data = generator_valid,
            )

When I run the code I get this error message:

ValueError: Error when checking target: expected conv2d_21 to have 4 dimensions, but got array with shape (26, 1)

I know the problem is somewhere in the layer shapes, but I couldn't find it. Can someone please help me and explain the solution?

  • My guess is that your data generator is producing data for a typical classification problem, i.e. `X` for the image array and `y` for the classes. But your autoencoder requires the image array for `y` as well. – Kota Mori Sep 26 '18 at 16:10
  • That could be the problem. Can you provide a code example which solves it? – hk_03 Sep 26 '18 at 16:17
  • Unfortunately I don't have time, but you can refer to https://github.com/keras-team/keras/issues/3923. The comment by robertomest on Oct 3, 2016 looks promising. – Kota Mori Sep 26 '18 at 16:56

1 Answer


There are a few issues in your code:

  1. Pass class_mode='input' to the flow_from_directory method so that the input images are used as the labels as well (since you are training an autoencoder).
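
     For example, the training generator from the question might then look something like this (and similarly for generator_valid):

        generator_train = datagen.flow_from_directory(
                train_data_dir,
                target_size=(img_width, img_height),
                batch_size=batch_size,
                class_mode='input')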

  2. Pass padding='same' to the third Conv2D layer in the decoder:

    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    
  3. Use three filters in the last layer since your images are RGB:

    decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)
    
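Putting the three fixes together, the decoder and training part of the code from the question would then look roughly like this (the two generators are built as shown in point 1 above, both with class_mode='input'; everything else is unchanged):

    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)          # fix 2: padding='same' added
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)  # fix 3: 3 output channels for RGB

    autoencoder = Model(input_img, decoded)
    autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

    # fix 1: generators created with class_mode='input', so the targets are the images themselves
    autoencoder.fit_generator(generator_train,
                              validation_data=generator_valid)

With padding='same' everywhere, the three pooling/upsampling stages map 3600 → 1800 → 900 → 450 → 900 → 1800 → 3600, so the output shape matches the (3600, 3600, 3) input.
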
  • Thank you. I found the answer here too: https://github.com/keras-team/keras/issues/3923 – hk_03 Sep 26 '18 at 17:10
  • @hk_03 If the answer resolved your issue, kindly *accept* it by clicking on the checkmark next to the answer to mark it as "answered" - see [What should I do when someone answers my question?](https://stackoverflow.com/help/someone-answers) – today Oct 20 '18 at 15:19