All the examples I've managed to find of using xgboost with a custom cost function involve writing a function that takes two arguments: a vector of predictions and an xgboost `DMatrix` object. The function must then return two vectors, the first and second derivatives of the loss for each example (the gradient and hessian).
Some loss functions, however, require an additional parameter that is a hyperparameter of the training process. A (hopefully not too esoteric) example is Huber loss, which is implemented as pseudo-Huber loss so that it is twice differentiable. An example of how to do this in xgboost can be found in this excellent answer: https://stackoverflow.com/a/45370500/8363008
In that answer, the hyperparameter `scale` is used inside the function but never defined anywhere; I can only infer that it is assumed to be a global variable.
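For concreteness, my understanding of the pattern in that answer is roughly the following sketch (the exact names are mine, and `scale` plays the role of the Huber delta; note that the function body silently depends on the module-level `scale`):

```python
import numpy as np

scale = 1.0  # hyperparameter, apparently assumed to be a global

def pseudo_huber_obj(preds, dtrain):
    """Pseudo-Huber objective: returns per-example gradient and hessian.

    Loss: scale**2 * (sqrt(1 + (d/scale)**2) - 1), with d = pred - label.
    """
    d = preds - dtrain.get_label()   # residuals
    t = 1.0 + (d / scale) ** 2
    grad = d / np.sqrt(t)            # first derivative w.r.t. preds
    hess = 1.0 / (t * np.sqrt(t))    # second derivative w.r.t. preds
    return grad, hess
```

This is then passed as `obj=pseudo_huber_obj` to `xgb.train`, with `scale` set beforehand at module level.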
Is there a way to make `scale` an additional argument to the cost function and pass it in at training time, allowing a less clumsy way of, for example, grid-searching over `scale` than constantly reassigning a global variable?
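One workaround I have considered is binding `scale` with a closure (or equivalently `functools.partial`), since `obj` only needs to be a two-argument callable; a sketch, with all names my own:

```python
import numpy as np

def make_pseudo_huber_obj(scale):
    """Return a two-argument objective with `scale` bound at creation
    time, so no global state is needed."""
    def obj(preds, dtrain):
        d = preds - dtrain.get_label()
        t = 1.0 + (d / scale) ** 2
        return d / np.sqrt(t), 1.0 / (t * np.sqrt(t))
    return obj

# Hypothetical grid search over scale, no globals touched:
# for s in [0.5, 1.0, 2.0]:
#     bst = xgb.train(params, dtrain, num_boost_round=100,
#                     obj=make_pseudo_huber_obj(s))
```

This seems to work, but I would like to know whether there is a more idiomatic, officially supported way to pass such hyperparameters to a custom objective.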