I am using the scikit-learn library to perform Ridge regression with weights on individual samples. This can be done with estimator.fit(X, y, sample_weight=some_array). Intuitively, I expect larger weights to mean greater relevance for the corresponding samples.
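My understanding (an assumption on my part, based on how weighted least squares usually works) is that the weights multiply the per-sample squared residuals in the Ridge objective, roughly

\min_\beta \; \sum_i w_i \, (y_i - x_i^\top \beta)^2 + \alpha \, \lVert \beta \rVert_2^2

so a larger w_i should penalize misfit on sample i more heavily and pull the fitted line toward that sample.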
To check this, I tested the method on the following 2-D example:
from sklearn import linear_model
import numpy
import matplotlib.pyplot as plt

# Data: three samples with a single feature each
x = numpy.array([[0], [1], [2]])
y = numpy.array([0, 2, 2])
sample_weight = numpy.array([1, 1, 1])

# Ridge regression with per-sample weights
clf = linear_model.Ridge(alpha=0.1)
clf.fit(x, y, sample_weight=sample_weight)

# Plot the fitted line over [-1, 3] together with the data points
xp = numpy.linspace(-1, 3)
yp = clf.predict(xp.reshape(-1, 1))  # predict expects a 2-D array of samples
plt.plot(xp, yp)
plt.plot(x, y, 'or')
plt.show()
I ran this code, and then ran it again after doubling the weight of the first sample:
sample_weight = numpy.array([2, 1, 1])
The resulting line moves away from the sample with the larger weight. This is counter-intuitive, since I expected the sample with the larger weight to have more influence on the fit.
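For completeness, here is a condensed sketch that reproduces the comparison in one script, fitting both weightings and drawing both lines on the same axes (the loop and the labels are my own additions):

from sklearn import linear_model
import numpy
import matplotlib.pyplot as plt

x = numpy.array([[0], [1], [2]])
y = numpy.array([0, 2, 2])
xp = numpy.linspace(-1, 3).reshape(-1, 1)

# Fit once per weighting and draw both regression lines
for weights, label in [([1, 1, 1], 'uniform weights'),
                       ([2, 1, 1], 'first sample doubled')]:
    clf = linear_model.Ridge(alpha=0.1)
    clf.fit(x, y, sample_weight=numpy.array(weights))
    plt.plot(xp, clf.predict(xp), label=label)

plt.plot(x, y, 'or')
plt.legend()
plt.show()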
Am I using the library incorrectly, or is there an error in it?