
Using Databricks, I have basically copied and pasted the same code from here: https://www.dataiku.com/learn/guide/code/python/advanced-xgboost-tuning.html

I am getting a bug with the following code:

import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.metrics import roc_auc_score

train = data.sample(frac=0.70, random_state=123)
valid = data.loc[~data.index.isin(train.index), :]

y_train = train['target']
X_train = train.drop(['target'], axis=1)
y_valid = valid['target']
X_valid = valid.drop(['target'], axis=1)
def objective(space):

    clf = xgb.XGBClassifier(n_estimators = 10000,
                            max_depth = space['max_depth'],
                            min_child_weight = space['min_child_weight'],
                            subsample = space['subsample'])

    eval_set = [(X_train, y_train), (X_valid, y_valid)]

    clf.fit(X_train, y_train,
            eval_set=eval_set, eval_metric="auc",
            early_stopping_rounds=30)

    pred = clf.predict_proba(X_valid)[:, 1]
    auc = roc_auc_score(y_valid, pred)
    print("SCORE:", auc)

    return {'loss': 1 - auc, 'status': STATUS_OK}


space = {
        'max_depth': hp.quniform('x_max_depth', 5, 30, 1),
        'min_child_weight': hp.quniform('x_min_child', 1, 10, 1),
        'subsample': hp.uniform('x_subsample', 0.8, 1)
    }


trials = Trials()
best = fmin(fn=objective,
            space=space,
            algo=tpe.suggest,
            max_evals=100,
            trials=trials)


print(best)

The bug I get is as follows:

ValueError: invalid number of arguments
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<command-3897094> in <module>()
      4             algo=tpe.suggest,
      5             max_evals=100,
----> 6             trials=trials)
      7 
      8 

ValueError: invalid number of arguments

Any insight would be appreciated. This is my first question on Stack Overflow!

1 Answer


The blog post Advanced XGBoost tuning in Python that you referred to was published on August 22, 2016.

I think it was written for an older version of hyperopt and may not be suitable for the latest version of the package.

So please see the latest hyperopt wiki page for FMin. Here is a simple sample from it:

from hyperopt import fmin, tpe, hp

best = fmin(fn=lambda x: x ** 2,
            space=hp.uniform('x', -10, 10),
            algo=tpe.suggest,
            max_evals=100)
print(best)
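This minimises x ** 2 over x in [-10, 10], so best comes back as a dictionary with the single key 'x' and a value close to 0.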

You can see from the code in the GitHub repo that the space is expected to be a hyperopt.pyll.Apply expression, not a plain Python dictionary.
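Applying that to your snippet, one option is to wrap the parameter dictionary in hp.choice so that the top-level space is a single hyperopt.pyll.Apply expression. Below is a minimal sketch of that idea; it assumes xgboost and scikit-learn are installed, uses a synthetic dataset in place of your data, and casts the hp.quniform values to int because they are returned as floats:

import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the asker's DataFrame.
X, y = make_classification(n_samples=1000, n_features=20, random_state=123)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=123)

def objective(params):
    # hp.quniform yields floats, so cast the tree parameters to int.
    clf = xgb.XGBClassifier(n_estimators=100,
                            max_depth=int(params['max_depth']),
                            min_child_weight=int(params['min_child_weight']),
                            subsample=params['subsample'])
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_valid, clf.predict_proba(X_valid)[:, 1])
    # fmin minimises, so report 1 - AUC as the loss.
    return {'loss': 1 - auc, 'status': STATUS_OK}

# Wrapping the dictionary in hp.choice makes the top-level space a
# hyperopt.pyll.Apply expression rather than a plain dict.
space = hp.choice('params', [{
    'max_depth': hp.quniform('x_max_depth', 5, 30, 1),
    'min_child_weight': hp.quniform('x_min_child', 1, 10, 1),
    'subsample': hp.uniform('x_subsample', 0.8, 1),
}])

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=20, trials=trials)
print(best)

The parameter names and bounds are taken from your space; max_evals is lowered to 20 only to keep the sketch quick.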

Peter Pan