I'm using the current stable version 0.13 of scikit-learn. I'm applying a linear support vector classifier to some data using the class sklearn.svm.LinearSVC.

In the chapter about preprocessing in scikit-learn's documentation, I've read the following:

Many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the l1 and l2 regularizers of linear models) assume that all features are centered around zero and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.

Question 1: Is standardization useful for SVMs in general, including those with a linear kernel function as in my case?

Question 2: As far as I understand, I have to compute the mean and standard deviation on the training data and apply the same transformation to the test data using the class sklearn.preprocessing.StandardScaler. However, what I don't understand is whether I have to transform the training data as well, or just the test data, before feeding it to the SVM classifier.

That is, do I have to do this:

scaler = StandardScaler()
scaler.fit(X_train)                # only compute mean and std here
X_test = scaler.transform(X_test)  # perform standardization by centering and scaling

clf = LinearSVC()
clf.fit(X_train, y_train)
clf.predict(X_test)

Or do I have to do this:

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # compute mean, std and transform training data as well
X_test = scaler.transform(X_test)  # same as above

clf = LinearSVC()
clf.fit(X_train, y_train)
clf.predict(X_test)

In short, do I have to use scaler.fit(X_train) or scaler.fit_transform(X_train) on the training data in order to get reasonable results with LinearSVC?

pemistahl

2 Answers


Neither.

scaler.transform(X_train) doesn't have any effect. The transform operation is not in-place. You have to do

X_train = scaler.fit_transform(X_train)

X_test = scaler.transform(X_test)

or

X_train = scaler.fit(X_train).transform(X_train)

You always need to do the same preprocessing on both the training and the test data. And yes, standardization is always good if it reflects your beliefs about the data. In particular for kernel SVMs it is often crucial.
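
Putting the pieces together, here is a minimal sketch of the corrected workflow from the question (reusing the X_train, X_test and y_train variables defined there):

from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # learn mean/std on the training data and scale it
X_test = scaler.transform(X_test)        # reuse the training mean/std to scale the test data

clf = LinearSVC()
clf.fit(X_train, y_train)                # fit on the scaled training data
y_pred = clf.predict(X_test)             # predict on the identically scaled test data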

Andreas Mueller
  • Sure, I'm aware of this. I was just too lazy to post it (shame on me). The keypoint is whether to use `fit()` or `fit_transform()` on `X_train`. – pemistahl Feb 04 '13 at 18:49
  • Added a comment. To rephrase your question again, it is not about `fit` or `fit_transform` but whether to transform both the test and the training data. The answer is: definitely. If you transform only one, how could you expect to learn anything? They would not be from the same distribution any more. – Andreas Mueller Feb 04 '13 at 19:15
  • Alright, that's what I wanted to know. I'm pretty new to SVMs and was a bit confused. Anyway, thanks for your quick reaction. :) – pemistahl Feb 04 '13 at 19:44
  • @AndreasMueller do I need to scale my features if I am using gradient boosting classification? – john doe Jul 22 '16 at 17:49
  • Not if you are using trees as weak learners. All tree-based models are agnostic to scaling. – Andreas Mueller Jul 26 '16 at 14:36
  • Are you sure about calling `transform` on the test set? The example [in this doc page](http://scikit-learn.org/stable/auto_examples/preprocessing/plot_robust_scaling.html) uses `fit` on the test set instead of `transform`. – Agostino Nov 30 '16 at 22:41
  • @Agostino Which line? Doesn't look like that to me. If it does, it's a bug and we need to fix the example. – Andreas Mueller Dec 01 '16 at 16:56
  • You are right. No idea if it was edited or if I saw it somewhere else. Thanks. – Agostino Dec 04 '16 at 19:38

Why not use a Pipeline to chain (or combine) transformers and estimators in one go? It saves you the hassle of fitting and transforming your data separately and then feeding it to the estimator, and it saves some space, too.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# scale the data with StandardScaler, then fit/predict with LinearSVC in one estimator
pipe_lrSVC = Pipeline([('scaler', StandardScaler()), ('clf', LinearSVC())])
pipe_lrSVC.fit(X_train, y_train)
y_pred = pipe_lrSVC.predict(X_test)
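
A nice side effect: when such a pipeline is passed to cross-validation utilities, the scaler is re-fit on the training portion of each fold, so no statistics leak from the held-out data. A minimal sketch, assuming a recent scikit-learn (sklearn.model_selection) and that X, y hold the full feature matrix and labels:

from sklearn.model_selection import cross_val_score

scores = cross_val_score(pipe_lrSVC, X, y, cv=5)  # the scaler is fit only on the training folds
print(scores.mean())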
vosirus