
I'm using a deep learning approach to address a regression problem with multiple outputs (16 outputs); each output lies in [0, 1] and they sum to 1. I am confused about which loss function is ideal for this problem. I have already tried mean squared error and mean absolute error, but the neural network always predicts the same value.

# standard Keras imports for the pieces used below
from keras import applications
from keras.layers import Flatten, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model
from keras.optimizers import Adam

# VGG16 backbone without the classification head
model = applications.VGG16(include_top=False, weights=None, input_shape=(256, 256, 3))

x = model.output
x = Flatten()(x)
x = Dense(1024)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Dropout(0.5)(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Dropout(0.5)(x)

# 16 outputs, one per target value
predictions = Dense(16, activation="sigmoid")(x)

model_final = Model(inputs=model.input, outputs=predictions)

model_final.compile(loss='mse', optimizer=Adam(lr=0.1), metrics=['mae'])

1 Answer


What you are describing sounds more like a classification task, since you want a probability distribution at the end. Therefore you should use a softmax (for example) in the last layer and cross-entropy as the loss.
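A minimal sketch of that change, reusing the VGG16 backbone from the question; the hidden-layer size and the learning rate are illustrative assumptions, not prescribed values:

# softmax head with categorical cross-entropy (sketch, not the only valid setup)
from keras import applications
from keras.layers import Flatten, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model
from keras.optimizers import Adam

base = applications.VGG16(include_top=False, weights=None, input_shape=(256, 256, 3))

x = Flatten()(base.output)
x = Dense(1024)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Dropout(0.5)(x)

# softmax guarantees the 16 outputs are non-negative and sum to 1
predictions = Dense(16, activation="softmax")(x)

model_final = Model(inputs=base.input, outputs=predictions)

# categorical cross-entropy also accepts "soft" targets (not one-hot),
# as long as each target row sums to 1; a learning rate much smaller
# than 0.1 is usually a safer starting point
model_final.compile(loss="categorical_crossentropy",
                    optimizer=Adam(lr=1e-4),
                    metrics=["mae"])

With this setup the targets can be passed directly as the 16-dimensional distributions, since cross-entropy compares two probability distributions rather than requiring hard class labels.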

pafi
  • May I kindly draw your attention to a similar [question](https://stackoverflow.com/questions/71305465/how-can-plot-loss-curves-for-training-and-test-for-multi-output-regression-task). – Mario Mar 01 '22 at 08:15