I am using Optuna + CatBoost to optimise and train some boosted trees. I would like to know the correct way to optimise my hyperparameters to maximise accuracy (rather than minimise log-loss).
At the moment my code is:
import catboost as cb
import numpy as np
import optuna

def objective(trial):
    param = {
        'depth': trial.suggest_int('depth', 4, 9),
        'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-1),
        'iterations': trial.suggest_int('iterations', 100, 7500),
        'loss_function': 'Logloss',
        'custom_loss': 'Accuracy'
    }
    # train_pool is a catboost.Pool built earlier from my training data
    for step in range(50):
        cv = cb.cv(train_pool, param, fold_count=3, verbose=1000)
        acc = np.max(cv['test-Accuracy-mean'])
        trial.report(acc, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return acc
study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)
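Since cb.cv already runs every boosting iteration internally and reports the per-iteration metrics, I am not sure the outer for step in range(50) loop adds anything beyond repeating the same cross-validation. For comparison, here is a minimal sketch of the loop-free objective I have been considering (train_pool is again assumed to be a catboost.Pool defined elsewhere):

def objective_single_cv(trial):
    param = {
        'depth': trial.suggest_int('depth', 4, 9),
        'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-1),
        'iterations': trial.suggest_int('iterations', 100, 7500),
        'loss_function': 'Logloss',
        'custom_loss': 'Accuracy'
    }
    # One cv call per trial; the result is a DataFrame with a
    # 'test-Accuracy-mean' column because 'Accuracy' is requested above.
    cv_result = cb.cv(train_pool, param, fold_count=3, verbose=1000)
    # Best mean test accuracy across boosting iterations.
    return np.max(cv_result['test-Accuracy-mean'])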
Would this be the correct way to tune hyperparameters to maximise accuracy rather than minimise log-loss?