I compared the scikit-learn Min-Max scaler from its preprocessing
module with a "manual" approach using NumPy. However, I noticed that the results are slightly different. Does anyone have an explanation for this?
I am using the following equation for Min-Max scaling:

X_norm = (X - X_min) / (X_max - X_min)

which should be the same as the scikit-learn expression: (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
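As a small illustration of that column-wise expression (the array below is made up purely for this example), each column is scaled independently to the range [0, 1]:

import numpy as np

A = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
A_scaled = (A - A.min(axis=0)) / (A.max(axis=0) - A.min(axis=0))
# first column: (1, 2, 3) -> (0.0, 0.5, 1.0); second column: (10, 20, 30) -> (0.0, 0.5, 1.0)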
I am using both approaches as follows:
from sklearn import preprocessing

def numpy_minmax(X):
    # "manual" Min-Max scaling with NumPy
    xmin = X.min()
    return (X - xmin) / (X.max() - xmin)

def sci_minmax(X):
    # Min-Max scaling via scikit-learn's MinMaxScaler
    minmax_scale = preprocessing.MinMaxScaler(feature_range=(0, 1), copy=True)
    return minmax_scale.fit_transform(X)
On a random sample:
import numpy as np
np.random.seed(123)
# A random 2D-array ranging from 0-100
X = np.random.rand(100, 2)
X = X.astype(np.float64)  # np.random.rand already returns float64; use astype rather than assigning to .dtype
X *= 100
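As a quick sanity check on the generated sample (an extra step, not part of the original comparison), the shape and overall value range can be inspected:

print(X.shape)           # (100, 2)
print(X.min(), X.max())  # roughly 0 and 100 after the multiplication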
The results are slightly different:
from matplotlib import pyplot as plt
sci_mm = sci_minmax(X)
numpy_mm = numpy_minmax(X)
plt.scatter(numpy_mm[:,0], numpy_mm[:,1],
color='g',
label='NumPy bottom-up',
alpha=0.5,
marker='o'
)
plt.scatter(sci_mm[:,0], sci_mm[:,1],
color='b',
label='scikit-learn',
alpha=0.5,
marker='x'
)
plt.legend()
plt.grid()
plt.show()
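To put a number on "slightly different" rather than judging by the plot alone, a simple element-wise comparison of the two arrays computed above could look like this (a sketch, assuming sci_mm and numpy_mm from the code above):

print(np.allclose(sci_mm, numpy_mm))    # False if the two scalings really disagree
print(np.abs(sci_mm - numpy_mm).max())  # largest absolute element-wise difference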