7

Data pre-processors such as StandardScaler should be used to fit_transform the train set and only transform (not fit) the test set. I expect the same fit/transform process to apply to cross-validation when tuning the model. However, I found that cross_val_score and GridSearchCV fit_transform the preprocessor on the entire train set, rather than fit_transform the inner_train set and transform the inner_validation set. I believe this artificially removes the variance from the inner_validation set, which biases the CV score (the metric GridSearch uses to select the best model). Is this a concern, or did I miss something?
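
For a plain train/test split, the pattern I have in mind looks like this (a minimal sketch; I use sklearn's built-in copy of the dataset here for brevity rather than the Kaggle CSV):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

sc = StandardScaler()
X_train_sc = sc.fit_transform(X_train)  # learn the scaling statistics on the train set only
X_test_sc = sc.transform(X_test)        # reuse those statistics on the test set, no refitting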

To demonstrate the above issue, I tried the following three simple test cases with the Breast Cancer Wisconsin (Diagnostic) Data Set from Kaggle.

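The three cases below reuse X and y from the snippet above and additionally assume these imports (again a sketch of my setup, not the exact notebook):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
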
  1. I intentionally fit and transform the entire X with StandardScaler()
X_sc = StandardScaler().fit_transform(X)
lr = LogisticRegression(penalty='l2', random_state=42)
cross_val_score(lr, X_sc, y, cv=5)
  2. I include SC and LR in the Pipeline and run cross_val_score
pipe = Pipeline([
    ('sc', StandardScaler()),
    ('lr', LogisticRegression(penalty='l2', random_state=42))
])
cross_val_score(pipe, X, y, cv=5)
  3. Same as 2, but with GridSearchCV
pipe = Pipeline([
    ('sc', StandardScaler()),
    ('lr', LogisticRegression(random_state=42))
])
params = {
    'lr__penalty': ['l2']
}
gs = GridSearchCV(pipe, param_grid=params, cv=5).fit(X, y)
gs.cv_results_

They all produce the same validation scores: [0.9826087, 0.97391304, 0.97345133, 0.97345133, 0.99115044].

Kai Zhao

2 Answers

8

No, sklearn doesn't fit_transform the preprocessor on the entire dataset when it sits inside a Pipeline.

To check this, I subclassed StandardScaler to print the size of the dataset sent to it.

from sklearn.preprocessing import StandardScaler

class StScaler(StandardScaler):
    """A StandardScaler that reports how many samples it is fitted on."""
    def fit_transform(self, X, y=None):
        print(len(X))
        return super().fit_transform(X, y)

If you now replace StandardScaler with this class in your code, you'll see that the dataset size passed in the first case is actually bigger.

But why does the accuracy remain exactly the same? I think this is because LogisticRegression is not very sensitive to feature scale. If we instead use a classifier that is very sensitive to scale, such as KNeighborsClassifier, the accuracy between the two cases starts to vary.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_sc = StScaler().fit_transform(X)  # fitted once on the full dataset
knn = KNeighborsClassifier(n_neighbors=1)
print(cross_val_score(knn, X_sc, y, cv=5))

Outputs:

569
[0.94782609 0.96521739 0.97345133 0.92920354 0.9380531 ]

And in the second case:

from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ('sc', StScaler()),
    ('knn', KNeighborsClassifier(n_neighbors=1))
])
print(cross_val_score(pipe, X, y, cv=5))

Outputs:

454
454
456
456
456
[0.95652174 0.97391304 0.97345133 0.92920354 0.9380531 ]

Not a big change accuracy-wise, but a change nonetheless.
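
To make the mechanics explicit, here is a rough manual equivalent of what cross_val_score does with the pipeline (a sketch; for a classifier and an integer cv, cross_val_score uses a StratifiedKFold splitter, so the fold sizes and scores should line up with the output above):

import numpy as np
from sklearn.model_selection import StratifiedKFold

scores = []
for train_idx, val_idx in StratifiedKFold(n_splits=5).split(X, y):
    sc = StandardScaler().fit(X[train_idx])  # scaling statistics come from the training folds only
    knn = KNeighborsClassifier(n_neighbors=1).fit(sc.transform(X[train_idx]), y[train_idx])
    scores.append(knn.score(sc.transform(X[val_idx]), y[val_idx]))  # held-out fold is only transformed
print(np.array(scores))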

Shihab Shahriar Khan
  • This is very helpful and the exact answer I am looking for. I tried myself with `RandomForestClassifier` and the CV scores now show differences. I learned a lot from your response. Thanks! – Kai Zhao Aug 26 '19 at 07:11
  • Shihab, thanks again. I am surprised that many Bootcamp students like me actually apply pre-processors incorrectly while using GridSearchCV (pre-process the train data before tuning the model with GridSearch). I am going to write a blog post on this topic. If you don't mind, I would like to quote some of the text and code from this discussion and credit you for the solution. Let me know if this is ok. Thanks! – Kai Zhao Sep 01 '19 at 16:43
  • Thanks! Will forward the link once done. Your comments will be highly appreciated. – Kai Zhao Sep 01 '19 at 19:54
  • Shihab, here is the link to the blog post "Pre-Process Data with Pipeline to Prevent Data Leakage during Cross-Validation". Let me know if you have any comments. https://towardsdatascience.com/pre-process-data-with-pipeline-to-prevent-data-leakage-during-cross-validation-e3442cca7fdc – Kai Zhao Sep 04 '19 at 03:52
  • @ShihabShahriarKhan Shihab, thank you and Kai for this useful QA. When I call ``` pipe = Pipeline([ ('sc', StandardScaler()), ('model', model(**parameters, random_state=42)) ])``` and then I call ```learning_curve(pipe, X_train, y_train, cv=RepeatedStratifiedKFold(n_splits=nb_splits, n_repeats=nb_repeats, random_state=42), scoring='accuracy') ``` does that also apply standardization ONLY to training and apply the transformation to validation (i.e. avoid data leakage) inside the ```cv loop``` ? – Perl Del Rey Nov 18 '19 at 19:09
  • Recently I have tried to run cross-validation on a pipeline, but it gives me NaN values. If I instead separate the ColumnTransformer, fit_transform the training data, and run cross-validation on top of that with the model, it gives me an accuracy score. I am unable to find out why it's behaving in this strange way. Can you please share some ideas on this? – dg S Aug 11 '20 at 15:27
  • @dgS, if you're using `cross_validate` or `cross_val_score`, note that by default they don't raise internal errors and instead set the score to NaN. To check, rerun with `error_score='raise'` – Shihab Shahriar Khan Aug 11 '20 at 20:21
  • @ShihabShahriarKhan, Thanks! I ran with the parameter `error_score='raise'` and got the error message `ValueError: Input contains NaN, infinity or a value too large for dtype('float64').`. I checked my data and couldn't find any NaN values. `np.isnan(X_train.any())` and `np.isfinite(X_train.all())` returned `False` and `True` respectively. But as I mentioned earlier, if I run the processing part separately and run `cross_val_score` on that with the model, I get an accuracy score. Is there something I am missing? Please help! – dg S Aug 12 '20 at 07:12
  • sry, can't think of any obvious reason from the top of my head – Shihab Shahriar Khan Aug 12 '20 at 12:04
2

Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set (X_test, y_test).

A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches exist, but generally follow the same principles). The following procedure is followed for each of the k "folds":

A model is trained using k-1 of the folds as training data; the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy). The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples is very small.

[Figure: scikit-learn's k-fold cross-validation diagram (5 folds, a different fold held out in each split)]
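
As a minimal sketch of this k-fold procedure (the dataset and estimator are just placeholders to keep the example self-contained):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
# Each fold serves once as the held-out set; the whole pipeline (scaler + model)
# is refitted from scratch on the remaining k-1 folds every time.
cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(make_pipeline(StandardScaler(), LogisticRegression()), X, y, cv=cv)
print(scores, scores.mean())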

Moreover, if your classes are imbalanced to begin with, you may need to balance them, e.g. with SMOTE, by oversampling the minority target class, or by undersampling the majority class.
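
If you do resample, the same caution applies: the resampling should happen inside the CV loop, on the training folds only. A sketch of one way to do that, assuming the third-party imbalanced-learn package (its Pipeline accepts sampler steps, unlike scikit-learn's):

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
pipe = ImbPipeline([
    ('sc', StandardScaler()),
    ('smote', SMOTE(random_state=42)),  # oversampling is applied to the training folds only
    ('lr', LogisticRegression(random_state=42)),
])
print(cross_val_score(pipe, X, y, cv=5))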

  • Thanks for the quick response and detailed explanation. The second paragraph in your response is where my question really focuses on. Using `split 1` in your figure as an example: – Kai Zhao Aug 26 '19 at 06:42
  • You're welcome. If you are satisfied with my answer, please do vote. – Aniruddha Choudhury Aug 26 '19 at 06:46
  • Thanks for the quick response and detailed explanation. The second paragraph in your response is where my question really focuses. Using `split 1` in your figure as an example: during CV, preprocessors like StandardScaler() should `fit_transform` folds 2-5 (the inner_train set) and only `transform` fold 1 (the validation set). I don't think cross_val_score() or GridSearchCV does this when calculating the CV score. Instead, StandardScaler() is fit_transformed on the entire folds 1-5, which (if true) I think is a problem. Hope this clarifies my question. Thanks again. – Kai Zhao Aug 26 '19 at 06:54
  • Aniruddha, thanks for posting sklearn's cross-validation figure in your response. I actually used it in my blog post. – Kai Zhao Sep 04 '19 at 03:54
  • Ok, no problem. Please vote for my post. – Aniruddha Choudhury Sep 04 '19 at 03:55