Here is my code:
from xgboost import XGBRegressor
from sklearn.metrics import r2_score

# GPU run: histogram-based tree method on device 0
xgb = XGBRegressor(
    max_depth=int(3.1847420232679196),
    n_estimators=int(27.03977712011383),
    subsample=0.9130850193972424,
    tree_method='gpu_hist',
    gpu_id=0,
)
xgb.fit(x_train, y_train)
r2_score(xgb.predict(x_test), y_test), r2_score(xgb.predict(x_train), y_train)
The result is (0.9322279800331514, 0.9838467922872913).
But when I don't use the GPU, the result is different:
# CPU run: same hyperparameters, default tree method
xgb = XGBRegressor(
    max_depth=int(3.1847420232679196),
    n_estimators=int(27.03977712011383),
    subsample=0.9130850193972424,
)
xgb.fit(x_train, y_train)
r2_score(xgb.predict(x_test), y_test), r2_score(xgb.predict(x_train), y_train)
This time the result is (0.6763052034789518, 0.9805904489567225).
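From what I understand, 'gpu_hist' and the CPU default tree method are not the same algorithm, and subsample adds randomness unless the seed is fixed, so maybe part of the gap is expected. Here is a sketch of the more controlled comparison I have in mind (same data splits as above; random_state=42 is just an arbitrary fixed seed; tree_method='hist' is the CPU counterpart of 'gpu_hist'):

from xgboost import XGBRegressor
from sklearn.metrics import r2_score

# Same hyperparameters as above, with a fixed seed so the row subsampling
# is reproducible; only the tree_method/device differs between the two runs.
params = dict(
    max_depth=3,
    n_estimators=27,
    subsample=0.9130850193972424,
    random_state=42,
)

cpu_model = XGBRegressor(tree_method='hist', **params).fit(x_train, y_train)
gpu_model = XGBRegressor(tree_method='gpu_hist', gpu_id=0, **params).fit(x_train, y_train)

print('CPU hist     R2:', r2_score(y_test, cpu_model.predict(x_test)))
print('GPU gpu_hist R2:', r2_score(y_test, gpu_model.predict(x_test)))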
My GPU: NVIDIA GeForce MX250.
When I run the same code on another computer (its GPU is a 2080 Ti), the result is different there as well. Why do the GPU and CPU results differ so much, and why do they change between machines?
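In case library versions also play a role, this is the snippet I would run on both machines to compare the environments (it only assumes the model xgb defined above):

import numpy
import sklearn
import xgboost

print('xgboost:', xgboost.__version__)
print('sklearn:', sklearn.__version__)
print('numpy  :', numpy.__version__)
# Effective constructor arguments of the model defined above
print(xgb.get_params())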