
I am trying to use U-Net for a semantic segmentation problem. The mask images are binary, but during training I find that my loss is negative. The loss I use is binary_crossentropy. Here is my code:

X_train = X_train /255
y_train = y_train /255
X_val = X_val/255
y_val = y_val/255

All of them have type np.float32.
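Since the masks are supposed to be binary, a quick sanity check is to look at the unique values after scaling. This is a standalone numpy sketch with made-up values (the `y_train` here is hypothetical, not the real data):

```python
import numpy as np

# Hypothetical uint8 mask with values in {0, 255}, as typically loaded from disk
y_train = np.array([[0, 255],
                    [255, 0]], dtype=np.uint8)

# After casting and scaling, a binary mask should contain only 0.0 and 1.0
y_train = y_train.astype(np.float32) / 255
print(np.unique(y_train))  # [0. 1.]
```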

Then I use the ImageDataGenerator to augment the images; the code is below:

def image_augmentation(X_train, y_train):
    # Augmentation parameters, shared by the image and mask generators.
    data_gen_args = dict(featurewise_center=True,
                         featurewise_std_normalization=True,
                         rotation_range=90.,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         zoom_range=0.2,
                         horizontal_flip=True,
                         vertical_flip=True)
    image_datagen = ImageDataGenerator(**data_gen_args)
    mask_datagen = ImageDataGenerator(**data_gen_args)

    # Use the same seed so images and masks get identical transforms.
    seed = 42
    image_datagen.fit(X_train, augment=True, seed=seed)
    mask_datagen.fit(y_train, augment=True, seed=seed)

    image_generator = image_datagen.flow(X_train, batch_size=8, seed=seed)
    mask_generator = mask_datagen.flow(y_train, batch_size=8, seed=seed)

    while True:
        yield image_generator.next(), mask_generator.next()


train_generator = image_augmentation(X_train,y_train)
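To see what those featurewise options actually do to a mask batch, here is a standalone numpy sketch that mimics the standardization with made-up values (not the real data):

```python
import numpy as np

# A tiny stand-in for a batch of binary masks (hypothetical values)
y = np.array([[0., 0., 1., 1.],
              [0., 1., 1., 1.]], dtype=np.float32)

# featurewise_center / featurewise_std_normalization amount to roughly this:
y_std = (y - y.mean()) / (y.std() + 1e-6)

# The standardized "masks" are no longer restricted to [0, 1]
print(y_std.min(), y_std.max())
```

If the mask generator applies this standardization, the targets it feeds to the loss are no longer binary.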

pat_init = 50
pat = pat_init
learning_rate = 1e-4
## path template for the model weights you want to save
file_path = "./model_v1/improvement-{epoch:02d}-{val_my_iou_metric:.5f}.hdf5"
checkpoint = ModelCheckpoint(file_path, monitor='val_my_iou_metric', verbose=1, save_best_only=True, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', mode='auto', factor=0.5, patience=5, min_lr=1e-9, verbose=1)
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=learning_rate), metrics=[my_iou_metric])

# Use the image data augmentation above to achieve a better result
model.fit_generator(
        train_generator, steps_per_epoch=2000, epochs=300,
        validation_data=(X_val, y_val), verbose=1,
        callbacks=[checkpoint, reduce_lr])

The last layer of my net is defined as follows:

output = Conv2D(1, activation='sigmoid',
                kernel_size=(1, 1),
                padding='same',
                data_format='channels_last')(x)

I am really curious why this happens. Doesn't the sigmoid function produce outputs between 0 and 1?

If you have any ideas, please discuss them with me. Thanks a lot!

Jiageng Zhu

1 Answer

Use

samplewise_center=True,
samplewise_std_normalization=True

in the ImageDataGenerator instead of the featurewise options.
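A plausible explanation for the negative loss (my reading, not stated explicitly above): the featurewise normalization is applied to the masks as well, which pushes the targets outside [0, 1], and binary cross-entropy is only guaranteed non-negative for targets in that range. A small numpy sketch of the formula:

```python
import numpy as np

def bce(y_true, y_pred):
    # Elementwise binary cross-entropy: -(y*log(p) + (1-y)*log(1-p))
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Valid binary target: the loss is always >= 0
print(bce(1.0, 0.9))   # ~0.105

# Target pushed outside [0, 1] by normalization: the loss can go negative
print(bce(-1.3, 0.1))  # negative
```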

李云飞
    While this might answer the author's question, it lacks some explaining words and/or links to documentation. Raw code snippets are not very helpful without some phrases around them. You may also find [how to write a good answer](https://stackoverflow.com/help/how-to-answer) very helpful. Please [edit] your answer - [From Review](https://stackoverflow.com/review/late-answers/21982773) – Nick Jan 21 '19 at 02:56
  • Meanwhile, I found another way to solve it. It may only be a binary classification, and your purpose may not be, but try treating it as a binary classification. – 李云飞 Jan 22 '19 at 00:53