I have created a neural network with two branches, one dedicated to regression and the other to classification.
The data consists of 104 columns: the first 52 are numeric values, and the remaining 52 are binary values (0 or 1) indicating whether the corresponding numeric value is positive or negative. Here the first 52 columns correspond to X_train_pca and the remaining ones to y_train_classification_enc.
inputs = keras.Input(shape=(X_train_pca.shape[1],))

# Regression branch
layer1_1 = Dense(500, activation='relu')(inputs)
layer1_2 = Dense(400, activation='relu')(layer1_1)
layer1_3 = Dense(300, activation='relu')(layer1_2)
layer1_4 = Dense(200, activation='relu')(layer1_3)
layer1_5 = Dense(150, activation='relu')(layer1_4)

# Classifier branch
layer2_1 = Dense(500, activation='relu')(inputs)
layer2_2 = Dense(450, activation='relu')(layer2_1)
layer2_3 = Dense(400, activation='relu')(layer2_2)
layer2_4 = Dense(350, activation='relu')(layer2_3)
layer2_5 = Dense(300, activation='relu')(layer2_4)
layer2_6 = Dense(250, activation='relu')(layer2_5)

classifier_output = Dense(52, activation='sigmoid', name='classifier')(layer2_6)
regression_output = Dense(52, activation='linear', name='regression')(layer1_5)

mdl_multi_pca = Model(inputs=inputs, outputs=[classifier_output, regression_output])
mdl_multi_pca.compile(optimizer='adam',
                      loss={'classifier': 'binary_crossentropy', 'regression': 'mean_squared_error'},
                      metrics={'classifier': ['Recall', 'Precision'], 'regression': 'mae'})

history = mdl_multi_pca.fit(X_train_pca,
                            {'classifier': y_train_classification_enc, 'regression': y_train_regression},
                            epochs=1500, batch_size=200, verbose=2)
I am looking for a way to implement cross-validation with this structure, reporting a single score for the regression (MSE) and, for the classifier, the F1 score. Since I have always used MLPClassifier and MLPRegressor, I am quite lost when it comes to cross-validation with this kind of model.
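So far the closest I have gotten is a manual K-fold loop: since scikit-learn's `cross_val_score` does not handle a Keras model with two outputs, I split the indices myself with `KFold`, rebuild the model on each fold, and score the two heads separately with `mean_squared_error` and `f1_score`. The sketch below uses much smaller layers than my real network just to keep it short, and `build_model`/`cross_validate` are names I made up; is this the right general approach?

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score, mean_squared_error
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

def build_model(n_features, n_outputs):
    # Rebuild the two-branch model from scratch for every fold so no
    # weights leak between folds (layer sizes shrunk for the sketch).
    inputs = keras.Input(shape=(n_features,))
    reg = Dense(64, activation='relu')(inputs)
    cls = Dense(64, activation='relu')(inputs)
    classifier_output = Dense(n_outputs, activation='sigmoid', name='classifier')(cls)
    regression_output = Dense(n_outputs, activation='linear', name='regression')(reg)
    model = Model(inputs=inputs, outputs=[classifier_output, regression_output])
    model.compile(optimizer='adam',
                  loss={'classifier': 'binary_crossentropy',
                        'regression': 'mean_squared_error'})
    return model

def cross_validate(X, y_cls, y_reg, n_splits=5, epochs=10, batch_size=32):
    mse_scores, f1_scores = [], []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, val_idx in kf.split(X):
        model = build_model(X.shape[1], y_cls.shape[1])
        model.fit(X[train_idx],
                  {'classifier': y_cls[train_idx], 'regression': y_reg[train_idx]},
                  epochs=epochs, batch_size=batch_size, verbose=0)
        # predict() on a two-output functional model returns a list of two arrays
        cls_pred, reg_pred = model.predict(X[val_idx], verbose=0)
        mse_scores.append(mean_squared_error(y_reg[val_idx], reg_pred))
        # Threshold the sigmoid outputs at 0.5, then average F1 over the samples
        f1_scores.append(f1_score(y_cls[val_idx], (cls_pred > 0.5).astype(int),
                                  average='samples', zero_division=0))
    return np.mean(mse_scores), np.mean(f1_scores)
```

I am not sure whether `average='samples'` is the right multilabel averaging for 52 binary sign columns, or whether `average='macro'` per column would be more standard.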