Using the MinMaxScaler from sklearn, I scale my data as below.
from sklearn import preprocessing

min_max_scaler = preprocessing.MinMaxScaler()
X_train_scaled = min_max_scaler.fit_transform(features_train)
X_test_scaled = min_max_scaler.transform(features_test)
However, when printing X_test_scaled.min(), I see some negative values (the values do not all fall between 0 and 1). This is because the lowest value in my test data is lower than the lowest value in the training data, on which the MinMaxScaler was fitted.
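To illustrate, here is a minimal sketch with hypothetical toy data (the values and variable names are made up for the example) showing how a scaler fitted on the training set maps a test value below the training minimum to a negative number:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

features_train = np.array([[2.0], [4.0], [6.0]])  # training min = 2, max = 6
features_test = np.array([[1.0], [5.0]])          # test min (1) is below the training min (2)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(features_train)  # scaled with (x - 2) / (6 - 2)
X_test_scaled = scaler.transform(features_test)        # same transform, fitted on train only

print(X_train_scaled.min(), X_train_scaled.max())  # 0.0 1.0
print(X_test_scaled.min())                         # -0.25, i.e. below 0
```

The transform learned on the training data is simply applied to the test data, so any test value outside the training range lands outside [0, 1].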
How much does having data that is not exactly normalized to the [0, 1] range affect the SVM classifier? Also, is it bad practice to concatenate the train and test data into a single matrix, perform min-max scaling to ensure all values fall between 0 and 1, and then separate them again?