I am using an autoencoder as a dimensionality reduction technique, so that the learned representation can be used as low-dimensional features for further analysis.
Here is the code snippet:
# Note: the implementation is based on Keras
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32
# Define the input layer
X_input = Input(shape=(X_train.shape[1],))
# Define the encoder
encoded = Dense(encoding_dim, activation='relu')(X_input)
# Define the decoder
decoded = Dense(X_train.shape[1], activation='sigmoid')(encoded)
# Create the autoencoder model
AE_model = Model(X_input, decoded)
# Compile the autoencoder model
AE_model.compile(optimizer='adam', loss='mse')
# Separate model that maps the input to its learned representation
learned_feature = Model(X_input, encoded)
# Train the autoencoder to reconstruct its own input
history = AE_model.fit(X_train, X_train, epochs=10, batch_size=32)
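For reference, this is how I then obtain the low-dimensional features from the trained encoder model (a minimal sketch; low_dim_features is just an illustrative variable name):
# Map the data to the 32-dimensional learned representation
low_dim_features = learned_feature.predict(X_train)
print(low_dim_features.shape)  # (n_samples, encoding_dim)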
I am looking for a way to measure the quality of the learned representation. One way I found is to measure the reconstruction error, and I use the following code to do so:
import math
reconstr_error = AE_model.evaluate(X_train, X_train, verbose=0)
print('The reconstruction error: %.2f MSE (%.2f RMSE)' % (reconstr_error, math.sqrt(reconstr_error)))
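As a cross-check, I also tried computing the reconstruction error by hand from the model's reconstructions (a minimal sketch, assuming X_train is a NumPy array; X_reconstructed and manual_mse are just illustrative names):
import numpy as np

# Reconstruct the training data with the trained autoencoder
X_reconstructed = AE_model.predict(X_train)
# Mean squared error per sample, then averaged over the whole set
per_sample_mse = np.mean((X_train - X_reconstructed) ** 2, axis=1)
manual_mse = per_sample_mse.mean()
print('Manual MSE: %.4f, RMSE: %.4f' % (manual_mse, np.sqrt(manual_mse)))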
I got 0.00 MSE (0.05 RMSE) as the result. However, I am not sure whether the code above is correct for measuring the reconstruction error. Also, if there is an alternative way to do so, could you please let me know?