I am trying out multi-class classification with xgboost, and I've built a model using the code below:
import xgboost as xgb

# No objective is specified, so the wrapper falls back to its default
clf = xgb.XGBClassifier(max_depth=7, n_estimators=1000)
clf.fit(byte_train, y_train)
train1 = clf.predict_proba(train_data)
test1 = clf.predict_proba(test_data)
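(For what it's worth, this is how I checked the default objective that I mention at the end of this question; a minimal sketch, relying on the objective attribute that the scikit-learn wrapper exposes in my version:)

# A freshly constructed classifier reports its default objective
print(xgb.XGBClassifier().objective)  # prints 'binary:logistic' for me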
This gave me some good results: I got log-loss below 0.7 for my case. But after looking through a few pages, I found that we are supposed to use a different objective in XGBClassifier for multi-class problems. Here's what those pages recommend:
# Explicit multi-class objective, as suggested by those pages
clf = xgb.XGBClassifier(max_depth=5, objective='multi:softprob',
                        n_estimators=1000, num_class=9)
clf.fit(byte_train, y_train)
train1 = clf.predict_proba(train_data)
test1 = clf.predict_proba(test_data)
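(In both cases I can confirm the model picked up all nine classes; a quick check, assuming the classes_ and n_classes_ attributes of the fitted scikit-learn wrapper:)

# The fitted wrapper infers the class set from y_train
print(clf.n_classes_)  # 9 in my case
print(clf.classes_)    # the nine distinct labels
print(test1.shape)     # (n_test_samples, 9): one probability column per class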
This second snippet also works, but it takes far longer to finish than my first code.
Why does my first code also work for the multi-class case? I have checked that the default objective is binary:logistic (the check shown after my first snippet above), which is meant for binary classification, yet it performed really well on multi-class data. Which one should I use if both are correct?
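For context, here is roughly how I compute the log-loss figure quoted above; a sketch using sklearn.metrics.log_loss, where y_test stands in for my held-out labels (not shown in the snippets above):

from sklearn.metrics import log_loss

# Multi-class log-loss over the predicted probability matrix
print(log_loss(y_test, test1))  # below 0.7 for my data with the first snippet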