I am quite new to Keras and deep learning, and I want to print the outputs of a set of my layers (named output1 through output5).
Below you can find the relevant section of the architecture. Do note that I have not provided fully reproducible code.
The goal is to validate the reported val_loss. The loss is categorical cross-entropy, whose (simplified) formula is:
L = −y⋅log(ŷ)
where L is the loss the model reports when I run the architecture, y is my ground truth, and ŷ is the model's estimate.
Goal: print the values of output1, output2, output3, output4 and output5 so that I know what my ŷ is. With those values I can then validate the formula by hand.
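For reference, this is how I plan to check the loss by hand once I have ŷ (a minimal NumPy sketch; y_true and y_pred are made-up stand-ins for y and ŷ):
import numpy as np

# Made-up one-hot ground truth (y) and softmax estimate (y-hat) for one sample, 3 classes
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.2, 0.7, 0.1])

# Categorical cross-entropy: L = -sum(y * log(y_hat))
loss = -np.sum(y_true * np.log(y_pred))
print(loss)  # -log(0.7) ≈ 0.357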
# Imports for the layers, optimizer and callbacks used below (Keras 2.x API)
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Model
from keras import optimizers
from keras.callbacks import ReduceLROnPlateau, EarlyStopping

layer_1 = Conv2D(filters[0], kernel_size[0], activation='relu', strides=strides)(inputs)
layer_11 = MaxPooling2D(pool_size=pool_size, strides=maxp_strides, padding='valid')(layer_1)
layer_2 = Conv2D(filters[1], kernel_size[1], activation='relu', strides=strides, padding='same')(layer_11)
layer_3 = Conv2D(filters[2], kernel_size[2], strides=strides, activation='relu', padding='same')(layer_2)
layer_4 = Conv2D(filters[3], kernel_size[3], strides=strides, activation='relu', padding='same')(layer_3)
layer_5 = Conv2D(filters[4], kernel_size[4], strides=strides, activation='relu', padding='same')(layer_4)
layer_6 = Flatten()(layer_5)
layer_7 = Dense(units[0], activation='relu')(layer_6)
layer_7 = Dropout(dropout)(layer_7)
layer_8 = Dense(units[1], activation='relu')(layer_7)
layer_8 = Dropout(dropout)(layer_8)
output1 = Dense(output_class, activation='softmax')(layer_8)
output2 = Dense(output_class, activation='softmax')(layer_8)
output3 = Dense(output_class, activation='softmax')(layer_8)
output4 = Dense(output_class, activation='softmax')(layer_8)
output5 = Dense(output_class, activation='softmax')(layer_8)
rms = optimizers.RMSprop(lr=lr, rho=rho, epsilon=epsilon, decay=decay)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=factor, patience=patience, min_lr=min_lr)
early_stopping = EarlyStopping(monitor='val_loss', patience=ES_patience)
model = Model(inputs=inputs, outputs=[output1, output2, output3, output4, output5])
model.compile(optimizer=rms, loss='categorical_crossentropy', metrics=['categorical_crossentropy'])
history = model.fit(
    X_train,
    [self.y_train[:, 0, :], self.y_train[:, 1, :], self.y_train[:, 2, :],
     self.y_train[:, 3, :], self.y_train[:, 4, :]],
    batch_size=batch_size,
    epochs=epoch,
    validation_split=val_split,
    verbose=verbose,
    callbacks=[reduce_lr])
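One thing I think I need to account for: since the model has five outputs and I pass a single 'categorical_crossentropy' loss, my understanding is that Keras computes one cross-entropy per head and reports loss (and val_loss) as their sum:
L_total = L1 + L2 + L3 + L4 + L5, where Lk = −yk⋅log(ŷk) for head k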
This is what I have tried:
print("outputs: {} - output1 {} - output2 {} - output3 {} - output4 {} - output5 {}".format(model.output,output1,output2,output3,output4,output5))
# outputs: [<tf.Tensor 'dense_38/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_39/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_40/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_41/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_42/Softmax:0' shape=(?, 3) dtype=float32>] - output1 Tensor("dense_38/Softmax:0", shape=(?, 3), dtype=float32) - output2 Tensor("dense_39/Softmax:0", shape=(?, 3), dtype=float32) - output3 Tensor("dense_40/Softmax:0", shape=(?, 3), dtype=float32) - output4 Tensor("dense_41/Softmax:0", shape=(?, 3), dtype=float32) - output5 Tensor("dense_42/Softmax:0", shape=(?, 3), dtype=float32)
and
print(model.layers[-1].output)
# Tensor("dense_42/Softmax:0", shape=(?, 3), dtype=float32)