I am creating classification and regression models using Random Forest (DRF) and GBM in H2O.ai. I believe I don't need to normalize (or scale) the data, as it's unnecessary and might even be harmful by smoothing out the nonlinear nature of the model. Could you please confirm whether my understanding is correct?
It's hard to determine a priori which tree model will be better (see the no-free-lunch theorem: https://en.wikipedia.org/wiki/No_free_lunch_theorem). In a recent problem I've been working on, I found that one feature had 80% importance. I think this made RF worse, because it built lots of trees based on that feature; I found XGBoost worked slightly better. I recommend trying H2O's AutoML to see which algorithm works best and going from there. (And yes, you don't have to scale your features with trees.) – Clem Wang Feb 19 '20 at 17:41
1 Answer
You don't need to do anything to your data when using H2O: all algorithms handle numeric, categorical, and string columns automatically. Some methods perform internal standardization automatically, but the tree methods don't and don't need to (a split such as age > 5 or income < 100000 works fine on the raw scale). Whether standardization is "harmful" depends on what you're doing; usually it's a good idea to let the algorithm handle standardization, unless you know exactly what you are doing. One example is clustering, where distances depend on the scaling (or lack thereof) of the data.
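To make the two points above concrete, here is a minimal plain-Python sketch (not H2O code; the feature values and thresholds are made up for illustration). It shows why a tree split is invariant to rescaling a feature, while a distance computation, as used in clustering, is not:

```python
# Illustrative sketch: a tree split only compares a feature to a threshold,
# so rescaling the feature (and the threshold with it) by the same factor
# yields exactly the same partition of the data.

def stump_predict(x, threshold):
    """A one-split 'tree': returns the branch (0 or 1) for value x."""
    return 0 if x <= threshold else 1

ages = [3, 5, 8, 40, 65]        # hypothetical feature values
scale = 0.01                    # e.g. squash 'age' toward [0, 1]

raw = [stump_predict(a, 5) for a in ages]
scaled = [stump_predict(a * scale, 5 * scale) for a in ages]
assert raw == scaled            # identical branches: trees don't care about scale

# Distance-based methods behave differently: with unscaled (age, income)
# pairs, Euclidean distance is dominated by the large-valued income axis.
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

a, b = (25, 30000), (60, 31000)  # very different ages, similar incomes
c = (26, 90000)                  # almost the same age, very different income
print(dist(a, b) < dist(a, c))   # True: income alone decides "closeness"
```

Standardizing both columns first would let age contribute meaningfully to the distance, which is why clustering is sensitive to scaling while the tree split is not.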

Arno Candel