I have the following functions that standardize the input/output and invert that transform for a list of values. Now I want to apply the inverse transform to a tensor, so that I can report the loss value in "real" units. Obviously, a fitted sklearn scaler can't be applied to a tensor directly. It can surely be done by manual computation, but I wonder if there are framework means of doing this? I've found tf.nn.l2_normalize, but I'm not sure it's what I need, nor how I would invert it.

import numpy as np
from sklearn import preprocessing

def _transform(self, x_in, y_in):
    print(x_in, y_in)
    # Fit the scalers once, on the first data seen
    if not hasattr(self, "x_scaler"):
        self.x_scaler = preprocessing.StandardScaler().fit(_sample_feature_matrix(x_in))
        self.y_scaler = preprocessing.StandardScaler().fit(_sample_feature_matrix(y_in))
    x_std = self.x_scaler.transform(_sample_feature_matrix(x_in))
    y_std = self.y_scaler.transform(_sample_feature_matrix(y_in))
    return x_std, y_std

def _inverse_y(self, y_std):
    return self.y_scaler.inverse_transform(_sample_feature_matrix(y_std))

def _sample_feature_matrix(in_list):
    """Converts a list of samples with a single feature value in each to an
    (n_samples, n_features) matrix suitable for sklearn."""
    return np.array(in_list).reshape(-1, 1)
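For context, a minimal sketch of the manual-computation route mentioned above: a fitted StandardScaler stores its statistics in the `mean_` and `scale_` attributes, so the inverse transform is just elementwise arithmetic, `y_real = y_std * scale_ + mean_`. Because that is plain broadcasting, the same expression should also work on a TensorFlow or PyTorch tensor of predictions (the variable names below are illustrative, not from the original code):

```python
import numpy as np
from sklearn import preprocessing

y_in = [1.0, 2.0, 3.0, 4.0]
y = np.array(y_in).reshape(-1, 1)

scaler = preprocessing.StandardScaler().fit(y)
y_std = scaler.transform(y)

# Manual inversion using the fitted statistics; the same expression
# could be applied to a tensor inside a loss function, e.g.
#   real_loss = tf.reduce_mean(tf.abs((pred - truth) * scaler.scale_))
y_real = y_std * scaler.scale_ + scaler.mean_

print(np.allclose(y_real, scaler.inverse_transform(y_std)))  # True
```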
