I am preparing input to feed into a Keras Neural network for a multiclass problem as:

from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils

encoder = LabelEncoder()
encoder.fit(y)
encoded_Y = encoder.transform(y)

# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_Y)

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, dummy_y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.06, random_state=42)
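As a sanity check on the encoding itself, the `LabelEncoder` + one-hot round trip can be verified in isolation on toy labels (the labels below are hypothetical; `np.eye` stands in for `np_utils.to_categorical`, which produces the same matrix):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical toy labels, for illustration only
y = np.array(["cat", "dog", "bird", "dog", "cat"])

encoder = LabelEncoder()
encoded = encoder.fit_transform(y)

# One-hot encode with NumPy (equivalent output to np_utils.to_categorical)
dummy = np.eye(len(encoder.classes_))[encoded]

# Round trip: argmax undoes the one-hot, inverse_transform undoes the encoding
recovered = encoder.inverse_transform(np.argmax(dummy, axis=1))
assert (recovered == y).all()
```

If this assertion passes, the label transformation itself is not the source of the accuracy drop.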

After having trained the model, I try to run the following lines to obtain a prediction that reflects the original class names:

y_pred = model.predict_classes(X_test)
y_pred = encoder.inverse_transform(y_pred)

y_test = np.argmax(y_test, axis = 1)
y_test = encoder.inverse_transform(y_test)

[Training plot: training and validation accuracy curves]

However, I obtain a surprisingly low test accuracy (0.36), as opposed to the training and validation accuracies, which reach 0.98. Is this the right way of transforming the predicted classes back into the original labels?

I compute accuracies as:

# For training
history.history['acc']

# For testing
accuracy_score(y_test, y_pred)
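One way to rule out the decoding step: `accuracy_score` gives the same value whether it is computed on integer class indices or on the decoded string labels, since `inverse_transform` is a one-to-one mapping. A minimal sketch with hypothetical predictions:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score

encoder = LabelEncoder()
encoder.fit(["a", "b", "c"])

# Hypothetical integer predictions and one-hot ground truth
y_pred_int = np.array([0, 2, 1, 1])
y_test_onehot = np.eye(3)[[0, 1, 1, 2]]

# Undo the one-hot encoding
y_test_int = np.argmax(y_test_onehot, axis=1)

# Accuracy is identical on integers and on decoded labels
acc_int = accuracy_score(y_test_int, y_pred_int)
acc_lbl = accuracy_score(encoder.inverse_transform(y_test_int),
                         encoder.inverse_transform(y_pred_int))
assert acc_int == acc_lbl
```

If the two values agree on your data as well, the train/test gap comes from the model (e.g. overfitting or a leakage/mismatch between splits), not from the label conversion.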