I am building a model where the prediction is a matrix coming out of a conv layer. My loss function is:
from keras import backend as K

def custom_loss(y_true, y_pred):
    print("in loss...")
    final_loss = float(0)
    print(y_pred.shape)
    print(y_true.shape)
    for i in range(7):
        for j in range(14):
            tl = float(0)
            gt = y_true[i, j]
            gp = y_pred[i, j]
            if gt[0] == 0:
                tl = K.square(gp[0] - gt[0])
            else:
                for l in range(5):
                    tl = tl + K.square(gp[l] - gt[l]) / 5
            final_loss = final_loss + tl / 98
    return final_loss
The shapes printed for the two arguments are:
(?, 7, 14, 5)
(?, ?, ?, ?)
The labels have shape 7x14x5.
It looks like the loss function is called on a whole batch of predictions rather than on one prediction at a time. I am relatively new to Keras and don't really understand how these things work.
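Based on that, I tried sketching a vectorized version of the same loss that operates on the whole batch tensor instead of indexing single grid cells in Python loops. This is just my own attempt (it assumes y_true and y_pred are both (batch, 7, 14, 5) and that K.equal / K.cast / K.mean on the TensorFlow backend behave the way I think they do), and I am not sure it is really equivalent to the loop above:

from keras import backend as K

def custom_loss_vectorized(y_true, y_pred):
    # Both tensors are (batch, 7, 14, 5); index the channel axis, not the batch axis
    err_conf = K.square(y_pred[..., 0] - y_true[..., 0])         # (batch, 7, 14)
    err_all = K.mean(K.square(y_pred - y_true), axis=-1)         # (batch, 7, 14)
    # Where the ground-truth confidence channel is 0, use only the confidence error,
    # otherwise the mean squared error over all 5 channels (mirrors the if/else above)
    is_empty = K.cast(K.equal(y_true[..., 0], 0.0), K.floatx())  # (batch, 7, 14)
    per_cell = is_empty * err_conf + (1.0 - is_empty) * err_all
    # Average over the 7x14 grid, leaving one loss value per sample in the batch
    return K.mean(per_cell, axis=[1, 2])

I am also not sure whether returning one value per sample like this (instead of a single float) is what Keras expects.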
This is my model:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D

model = Sequential()
input_shape = (360, 640, 1)
model.add(Conv2D(24, (5, 5), strides=(1, 1), input_shape=input_shape))
model.add(MaxPooling2D((2, 4), strides=(2, 2)))
model.add(Conv2D(48, (5, 5), padding="valid"))
model.add(MaxPooling2D((2, 4), strides=(2, 2)))
model.add(Conv2D(48, (5, 5), padding="valid"))
model.add(MaxPooling2D((2, 4), strides=(2, 2)))
model.add(Conv2D(24, (5, 5), padding="valid"))
model.add(MaxPooling2D((2, 4), strides=(2, 2)))
model.add(Conv2D(5, (5, 5), padding="valid"))
model.add(MaxPooling2D((2, 4), strides=(2, 2)))
model.compile(
    optimizer="Adam",
    loss=custom_loss,
    metrics=['accuracy'])
print(model.summary())
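To double-check that the network's output grid really is 7x14 with 5 channels, I also added a quick check of the output shape (this line is my addition, not part of the original script):

print(model.output_shape)   # shows (None, 7, 14, 5), matching the [?, 7, 14, 5] in the error below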
I am getting an error like:
ValueError: slice index 7 of dimension 1 out of bounds. for 'loss/max_pooling2d_5_loss/custom_loss/strided_slice_92' (op: 'StridedSlice') with input shapes: [?,7,14,5], [2], [2], [2] and with computed input tensors: input[1] = <0 7>, input[2] = <1 8>, input[3] = <1 1>.
I think this is because the arguments to the loss function are 4D tensors containing many predictions at a time.
How can I fix this? Is the problem in the way I assign the loss function, or in the loss function itself? For now the loss function returns a Python float, but what is it supposed to return?