I am currently building a neural network with the Keras Functional API, on a dataset with multiple independent variables and a single categorical target variable, using the following code:
input_layer = keras.Input(shape=(89,), name="input_layer")
dense_1 = keras.layers.Dense(50, name = 'dense_1')(input_layer)
dense_2 = keras.layers.Dense(50, name = 'dense_2')(dense_1)
classification_output_1 = keras.layers.Dense(31, activation = 'softmax', name = 'classification_output_1')(dense_2)
model = keras.Model(inputs = input_layer, outputs = classification_output_1)
model.compile(
optimizer = "adam",
loss = 'sparse_categorical_crossentropy'
)
history = model.fit(
X_train,
y_train["Category_Target"],
epochs = 10,
batch_size = 50,
verbose = 1
)
y_pred = model.predict(X_train)  # fit() returns a History object, so predict on the model itself
y_pred = pd.DataFrame(y_pred)    # single output, so predict() returns one array directly
y_pred.columns = pd.get_dummies(y_train["Category_Target"]).columns
Since each row of y_pred is a vector of softmax probabilities (one per class), I presumed that using pd.get_dummies().columns would get me the class labels in the right order. My question is: is this method reliable at all, and if not, is there another way to get class labels for predicted values? The old method of getting labels for classes (i.e. predict_classes) is deprecated, and I can't really find a reliable replacement. I've also considered other approaches such as:
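For reference, the replacement usually suggested for predict_classes is taking np.argmax over the softmax output and mapping the resulting indices back to label values. A minimal sketch of what I mean (probs and class_labels are made-up placeholders, and I'm assuming the targets were integer-encoded in sorted order, which is exactly the part I'm unsure about):

```python
import numpy as np

# probs stands in for model.predict(X_train): shape (n_samples, n_classes)
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])

# Assumed: class_labels lists the unique target values in the same order
# as the integer encoding used for training (here, sorted order).
class_labels = np.array(["cat_a", "cat_b", "cat_c"])

pred_indices = np.argmax(probs, axis=-1)  # the suggested predict_classes replacement
pred_labels = class_labels[pred_indices]  # -> ["cat_b", "cat_a"]
```

This only labels predictions correctly if class_labels really does match the encoding order, which is what I'm trying to pin down.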
y_pred.columns = list(y_train.drop_duplicates(subset = "Category_Target")["Category_Target"])
Neither of the methods I've tried seems to be a surefire way of labelling the classes correctly.
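To illustrate my concern, the two orderings can disagree: pd.get_dummies sorts the unique values, while drop_duplicates keeps them in order of first appearance. A toy example:

```python
import pandas as pd

s = pd.Series([2, 0, 1, 0], name="Category_Target")

print(list(pd.get_dummies(s).columns))  # sorted unique values: [0, 1, 2]
print(list(s.drop_duplicates()))        # first-appearance order: [2, 0, 1]
```

So at most one of the two can match the encoding the loss actually saw during training, and I'd like to know which (if either) is guaranteed.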