
I am having a problem with the estimator.loss_ method for the sklearn Gradient Boosting Classifier. I am trying to graph the test error in comparison to the training error over time. Here is some of my data prep:

# convert data to numpy array
train = np.array(shuffled_ds)

#label encode neighborhoods
for i in range(train.shape[1]):
    if i in [1,2]:
        print(i,list(train[1:5,i]))
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(train[:,i]))
        train[:,i] = lbl.transform(train[:,i])
print('neighborhoods & crimes encoded')

#create target vector
y_crimes = train[:,1]
train = np.delete(train,1,1)
print(y_crimes)

#arrays to float
train = train.astype(float)
y_crimes = y_crimes.astype(float)

#data holdout for testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    train, y_crimes, test_size=0.4, random_state=0)
print('test data created')

#train model and check train vs test error
print('begin training...')
est = GBC(n_estimators=3000, learning_rate=0.1, max_depth=4, max_features=1, min_samples_leaf=3)
est.fit(X_train,y_train)
print('done training')

At this point when I print out my array shapes with

print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)

I get:

(18000, 9)
(12000, 9)
(18000,)
(12000,)

respectively.

So my shapes are compatible according to the sklearn documentation. But next, when I try to fill a test score vector so I can graph it alongside my training error:

test_score=np.empty(len(est.estimators_))
for i, pred in enumerate(est.staged_predict(X_test)):
    test_score[i] = est.loss_(y_test,pred)

and I get the following error:

    return np.sum(-1 * (Y * pred).sum(axis=1) +
ValueError: operands could not be broadcast together with shapes (12000,47) (12000,)

I'm not sure where that 47 is coming from. I have used this same procedure before on another dataset and had no issue. Any help would be much appreciated.

1 Answer

You're getting this error because loss_ expects the output of the staged_decision_function method, not staged_predict. staged_predict yields 1-D arrays of predicted class labels with shape (n_samples,), while the multinomial deviance loss works on the raw decision values of shape (n_samples, n_classes). Your target has 47 classes, which is where the 47 in (12000, 47) comes from: loss_ one-hot encodes y_test into a (12000, 47) matrix, which cannot be broadcast against the (12000,) label array from staged_predict.

See the scikit-learn example Gradient Boosting regularization:

clf = ensemble.GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)

# compute test set deviance
test_deviance = np.zeros((params['n_estimators'],), dtype=np.float64)

for i, y_pred in enumerate(clf.staged_decision_function(X_test)):
    # clf.loss_ assumes that y_test[i] in {0, 1}
    test_deviance[i] = clf.loss_(y_test, y_pred)
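The shape requirement is visible in the traceback line itself: the multinomial deviance sums -(Y * pred) over the class axis, where Y is the one-hot encoded target of shape (n_samples, n_classes). Below is a rough NumPy reconstruction of that computation (my own sketch, not sklearn's actual code) showing why a 2-D decision-function array is needed:

```python
import numpy as np

def multinomial_deviance(y_true, raw_scores):
    """Rough sketch of the multinomial deviance from the traceback.

    y_true:     1-D integer class labels, shape (n_samples,)
    raw_scores: decision-function values, shape (n_samples, n_classes)
    """
    n, k = raw_scores.shape
    # one-hot encode the labels -> shape (n_samples, n_classes);
    # this is the (12000, 47) array from the error, and it cannot be
    # broadcast against a 1-D array of predicted labels
    Y = np.zeros((n, k))
    Y[np.arange(n), y_true.astype(int)] = 1.0
    # numerically stable log-sum-exp over the class axis
    m = raw_scores.max(axis=1, keepdims=True)
    lse = (m + np.log(np.exp(raw_scores - m).sum(axis=1, keepdims=True))).ravel()
    # mirrors the traceback line: np.sum(-1 * (Y * pred).sum(axis=1) + ...)
    return np.sum(-(Y * raw_scores).sum(axis=1) + lse) / n

# with all-zero scores the model is maximally uncertain,
# so the deviance equals log(n_classes)
scores = np.zeros((4, 3))
labels = np.array([0, 1, 2, 0])
print(multinomial_deviance(labels, scores))  # -> log(3) ~ 1.0986
```

To plot test deviance against training error, note that the fitted classifier also stores the training deviance at each stage in its train_score_ attribute, so you can plot test_deviance and clf.train_score_ on the same axes.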
Ibraim Ganiev