
I compared scikit-learn's Min-Max scaler from its preprocessing module with a "manual" approach using NumPy. However, I noticed that the results are slightly different. Does anyone have an explanation for this?

Using the following equation for Min-Max scaling:

X_norm = (X - X_min) / (X_max - X_min)

which should be the same as the scikit-learn one: (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

I am using both approaches as follows:

from sklearn import preprocessing

def numpy_minmax(X):
    xmin = X.min()
    return (X - xmin) / (X.max() - xmin)

def sci_minmax(X):
    minmax_scale = preprocessing.MinMaxScaler(feature_range=(0, 1), copy=True)
    return minmax_scale.fit_transform(X)

On a random sample:

import numpy as np

np.random.seed(123)

# A random 2D array with values in [0, 100); np.random.rand already returns float64
X = np.random.rand(100, 2)
X *= 100

The results are slightly different:

from matplotlib import pyplot as plt

sci_mm = sci_minmax(X)
numpy_mm = numpy_minmax(X)

plt.scatter(numpy_mm[:,0], numpy_mm[:,1],
        color='g',
        label='NumPy bottom-up',
        alpha=0.5,
        marker='o'
        )

plt.scatter(sci_mm[:,0], sci_mm[:,1],
        color='b',
        label='scikit-learn',
        alpha=0.5,
        marker='x'
        )

plt.legend()
plt.grid()

plt.show()

[Scatter plot: the NumPy-scaled points (green circles) are slightly offset from the scikit-learn-scaled points (blue crosses)]

1 Answer


scikit-learn processes each feature individually. So you need to specify axis=0 when taking the min; otherwise numpy.min computes the minimum over all elements of the array, not of each column separately:

>>> xs
array([[1, 2],
       [3, 4]])
>>> xs.min()
1
>>> xs.min(axis=0)
array([1, 2])
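
When you then subtract this per-column minimum, NumPy broadcasting applies it column-wise, so each feature is shifted by its own minimum (continuing the toy xs array above):

>>> xs - xs.min(axis=0)
array([[0, 0],
       [2, 2]])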

The same applies to numpy.max, so the corrected function is:

def numpy_minmax(X):
    xmin = X.min(axis=0)  # column-wise minima, shape (n_features,)
    return (X - xmin) / (X.max(axis=0) - xmin)  # broadcasts across rows, scaling each column independently

Doing so, you get an exact match:

[Scatter plot: the NumPy and scikit-learn results now coincide]
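
You can also check this numerically rather than by eye; a minimal sketch, reusing X, sci_minmax, and the corrected numpy_minmax from above:

sci_mm = sci_minmax(X)
numpy_mm = numpy_minmax(X)

# prints True: both scalings agree within np.allclose's default tolerance
print(np.allclose(sci_mm, numpy_mm))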

behzad.nouri