
I am training a U-Net model on 238 satellite images. My val_loss does not decrease below 0.3, despite the different architectures I have tried:

  • Conv2D(8-16-32-64-128-64-32-16-8)
  • Conv2D(16-32-64-128-256-128-64-32-16)
  • Conv2D(32-64-128-256-512-256-128-64-32)
  • activation = relu (hidden layers)
  • activation = sigmoid (output layer)
  • validation_split=0.10, batch_size=10, epochs=30
  • loss='binary_crossentropy'
  • optimizers.Adam(learning_rate=0.001) (I also tried 0.01 and 0.0001)
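
For reference, a minimal Keras sketch of the setup described above. Only the filter progressions are listed, so the standard U-Net skip connections and two-convolution blocks are assumed (the 16-32-64-128-256 variant is shown here):

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = layers.Input(shape=(128, 128, 3))

# Encoder: 16-32-64-128, halving resolution at each stage
c1 = conv_block(inputs, 16); p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 32);     p2 = layers.MaxPooling2D()(c2)
c3 = conv_block(p2, 64);     p3 = layers.MaxPooling2D()(c3)
c4 = conv_block(p3, 128);    p4 = layers.MaxPooling2D()(c4)

# Bottleneck: 256
b = conv_block(p4, 256)

# Decoder: 128-64-32-16 with skip connections to the encoder stages
u4 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
c5 = conv_block(layers.Concatenate()([u4, c4]), 128)
u3 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c5)
c6 = conv_block(layers.Concatenate()([u3, c3]), 64)
u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c6)
c7 = conv_block(layers.Concatenate()([u2, c2]), 32)
u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c7)
c8 = conv_block(layers.Concatenate()([u1, c1]), 16)

# Single-channel sigmoid output for the binary (class / background) mask
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c8)

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# X: (N, 128, 128, 3) float images, Y: (N, 128, 128, 1) binary masks
# model.fit(X, Y, validation_split=0.10, batch_size=10, epochs=30)
```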

If you have any leads, I'm interested.

Update: I now have 968 images.

  • Could you please give some information about the input data shape? If I understand it properly, your validation dataset contains only 23 images. Did you try different optimizers or schedulers? It may be a good idea to transform your (image, mask) pairs to make your model more robust on the validation dataset. – s3nh Jun 23 '20 at 08:29
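
(A minimal sketch of the scheduler idea from this comment, assuming the standard Keras callbacks; the patience and factor values are illustrative, not taken from the question:)

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    # Halve the learning rate after 3 epochs without val_loss improvement
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3, min_lr=1e-6),
    # Stop once val_loss has plateaued for 8 epochs, restoring the best weights
    EarlyStopping(monitor="val_loss", patience=8, restore_best_weights=True),
]

# model.fit(X, Y, validation_split=0.10, batch_size=10, epochs=30,
#           callbacks=callbacks)
```
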
  • Thank you for your feedback. My data consists of 238 images, 23 of which form the validation set. They are resized to 128×128 px before being fed to the model. For the optimizer I used Adam with different learning rates (from 0.01 to 0.00001). For your information, I am trying to predict only one class (1 for my class, 0 for the background), which is why I used binary_crossentropy. – Elouafi Jun 23 '20 at 14:02
  • You have very little data; you should increase your dataset and add image augmentations. You are also possibly making this even harder by resizing images to 128×128 (it depends on your task). Try to maximize your data first on a baseline model before tweaking any parameters. – jwitos Jun 23 '20 at 19:40
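
(A minimal sketch of paired image/mask augmentation as suggested above, assuming a tf.data input pipeline; the essential point is that each random spatial transform is applied identically to the image and its mask:)

```python
import tensorflow as tf

def augment(image, mask):
    # Random horizontal flip, applied identically to image and mask
    flip = tf.random.uniform(()) > 0.5
    image = tf.cond(flip, lambda: tf.image.flip_left_right(image), lambda: image)
    mask = tf.cond(flip, lambda: tf.image.flip_left_right(mask), lambda: mask)
    # Random 0/90/180/270-degree rotation (spatially safe for binary masks)
    k = tf.random.uniform((), minval=0, maxval=4, dtype=tf.int32)
    return tf.image.rot90(image, k), tf.image.rot90(mask, k)

# train_ds = (tf.data.Dataset.from_tensor_slices((X_train, Y_train))
#             .map(augment)
#             .batch(10))
```
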
  • @jwitos Actually, I have added more images; I'm at 968 images (128×128 px) now. I'm currently testing with the SGD optimizer, but apparently it doesn't change anything: my val_loss is still very high. Could you please explain what you mean by "You're also possibly making this even more complicated by resizing images to 128x128"? My goal is image segmentation: the training input is an RGB image plus its binary mask, and once the model is trained I want to pass a satellite image of any size to it for segmentation. – Elouafi Jun 23 '20 at 20:52
  • @Elouafi Is your val loss decreasing at all? Regarding resizing: when you resize images, you lose some of the features. E.g. when you resize a high-res image to a 32×32 square, you will definitely lose most of the features. You need to look at your data and decide whether you are losing important features when resizing to 128×128. Also, look into data augmentation, as I mentioned before. – jwitos Jun 23 '20 at 22:32
  • @jwitos Indeed, 40% of my images already have a dimension of 128×128 and the rest are 256×256, but before training I resize them all to 128×128. The val_loss decreases at the beginning: with the new images, it starts from 0.69 and stops at 0.54. – Elouafi Jun 23 '20 at 23:06
  • How difficult is your task, i.e. what are the segmentation masks representing? Are you able to overfit the network (do you get good prediction results on the train set)? If not, I would suspect a problem with your code; if yes, my bet is still on your small data size, and I would suggest trying data augmentation. – jwitos Jun 24 '20 at 04:41
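
(A quick way to run the overfitting check suggested in this comment; the arrays below are random stand-ins for a handful of real image/mask pairs, and `model` refers to the sketch near the top:)

```python
import numpy as np

# Dummy stand-ins for ~10 real (image, mask) training pairs
small_X = np.random.rand(10, 128, 128, 3).astype("float32")
small_Y = (np.random.rand(10, 128, 128, 1) > 0.5).astype("float32")

# A U-Net should be able to memorize 10 samples, so training loss should
# fall toward zero. If it does not, suspect the data pipeline, image/mask
# alignment, or loss setup rather than the dataset size.
history = model.fit(small_X, small_Y, batch_size=2, epochs=200, verbose=0)
print("final training loss:", history.history["loss"][-1])
```
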
  • @jwitos Indeed, it is difficult: the objective is to detect slums in satellite images of a city using U-Net. So I have RGB images (the result of splitting a large 9000×9000 px satellite image into 128×128 tiles) and their masks; the masks are black and white, with white representing the slums and black the background (same size as the images). The accuracy varies between 0.65 and 0.79 at best. – Elouafi Jun 24 '20 at 10:04

0 Answers