
I am trying to solve a regression problem on the Boston housing dataset with a random forest regressor. I am using GridSearchCV to select the best hyperparameters.

Problem 1

Should I fit GridSearchCV on some X_train, y_train and then get the best parameters?

OR

Should I fit it on X, y to get the best parameters? (X, y = entire dataset)

Problem 2

Say I fit it on X, y, get the best parameters, and then build a new model with these best parameters. On what data should I now train this new model?

Should I train the new model on X_train, y_train or on X, y?

Problem 3

If I train the new model on X, y, then how will I validate the results?

My code so far

#Dataframes
feature_cols = ['CRIM','ZN','INDUS','NOX','RM','AGE','DIS','TAX','PTRATIO','B','LSTAT']

X = boston_data[feature_cols]
y = boston_data['PRICE']

Train Test Split of Data

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1)

Grid Search to get best hyperparameters

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

RFReg = RandomForestRegressor(random_state=1)   # base estimator to tune

param_grid = {
    'n_estimators': [100, 500, 1000, 1500],
    'max_depth': [4, 5, 6, 7, 8, 9, 10]
}

CV_rfc = GridSearchCV(estimator=RFReg, param_grid=param_grid, cv=10)
CV_rfc.fit(X_train, y_train)

CV_rfc.best_params_ 
#{'max_depth': 10, 'n_estimators': 100}

Train a model with max_depth=10, n_estimators=100

RFReg = RandomForestRegressor(max_depth = 10, n_estimators = 100, random_state = 1)
RFReg.fit(X_train, y_train)
y_pred = RFReg.predict(X_test)
y_pred_train = RFReg.predict(X_train)

RMSE: 2.8139766730629394
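
The RMSE above would come from something like the following (this step is not shown in the code itself; y_test and y_pred are from the snippet just above):

from sklearn.metrics import mean_squared_error
import numpy as np

# Root mean squared error of the tuned model on the held-out test split
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print('RMSE:', rmse)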

I just want some guidance on what the correct steps would be.

Rookie_123
  • This is a question about *methodology*, and not programming, hence more appropriate for [Cross Validated](https://stats.stackexchange.com/help/on-topic) (and arguably off-topic here). – desertnaut Nov 23 '18 at 16:22

2 Answers


In general, to tune the hyperparameters you should always train your model on X_train and use X_test to check the results. You tune the parameters based on the results obtained on X_test.

You should never tune hyperparameters over the whole dataset, because that would defeat the purpose of the train/test split (as you correctly ask in Problem 3).
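
In terms of the variables in the question, a minimal sketch of that workflow (CV_rfc, X_train, X_test, y_train, y_test are the objects defined there; the score call is just one way to check the tuned model on held-out data):

CV_rfc.fit(X_train, y_train)           # hyperparameters are chosen using X_train only
print(CV_rfc.best_params_)
print(CV_rfc.score(X_test, y_test))    # refitted best estimator evaluated on the unseen test split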

FMarazzi
  • But then the hyperparameters obtained will be biased towards the samples present in that `X_train` is what I feel – Rookie_123 Nov 23 '18 at 15:36
  • Rookie_123, you have a valid concern, but any model/hyperparameters will inherently be biased towards the train set. If they were biased on a test set then you technically can't speak of a test set to begin with. – cantdutchthis Nov 23 '18 at 15:45

This is a valid concern indeed.

Problem 1

GridSearchCV does indeed use cross validation to find a proper set of hyperparameters. But you should still keep a held-out set to make sure the optimal set of parameters is sound for data the search has never seen (so in the end you have train, validation, and test sets).
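
A minimal sketch of that setup, assuming the X and y from the question (the parameter grid here is only an illustration, not a recommendation):

from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor

# Hold out data the grid search never sees; its internal CV folds
# come only from the portion passed to fit().
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

search = GridSearchCV(RandomForestRegressor(random_state=1),
                      param_grid={'n_estimators': [100, 500],
                                  'max_depth': [6, 8, 10]},
                      cv=10)
search.fit(X_train, y_train)

print(search.best_score_)              # mean CV score of the best parameter set
print(search.score(X_test, y_test))    # same parameters checked on the held-out set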

Problem 2

GridSearchCV already gives you the best estimator, so you don't need to train a new one. In fact, the cross validation is just there to check that the model building is sound; you can then train on the full dataset (see https://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation for a detailed discussion).
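
If you do retrain on everything at the end, a sketch of two ways to do it (CV_rfc, X, y as in the question):

from sklearn.ensemble import RandomForestRegressor
from sklearn.base import clone

# Option 1: build a fresh model from the parameters the grid search selected
final_model = RandomForestRegressor(**CV_rfc.best_params_, random_state=1)
final_model.fit(X, y)

# Option 2: clone the already-selected best estimator and refit it on all the data
final_model = clone(CV_rfc.best_estimator_).fit(X, y)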

Problem 3

What you have already validated is the way you trained your model, i.e. that the hyperparameters you found are sound and that the training works as expected for the data you have.

Matthieu Brucher
  • This clarification is with respect to your answer for **Problem 1**: when you say `GridSearchCV` does cross validation, its cross validation will still be limited to `X_train` and `y_train`, correct me if I am wrong – Rookie_123 Nov 24 '18 at 07:30
  • This clarification is with respect to your answer for **Problem 3**: so there is no need to validate the model created on the entire dataset with the best parameters obtained by `GridSearchCV`? – Rookie_123 Nov 24 '18 at 07:34
  • Of course, CV will be done on the train dataset. Then you can validate the CV (best estimator) on the test dataset. – Matthieu Brucher Nov 24 '18 at 09:24
  • And indeed, you can use the whole dataset for the final training, as indicated on the datascience question. – Matthieu Brucher Nov 24 '18 at 09:24
  • @MatthieuBrucher Please let me know if you know an answer for this https://stackoverflow.com/questions/55604668/how-to-perform-gridsearchcv-with-cross-validation-in-python Thank you :) – EmJ Apr 10 '19 at 05:03