I am running a grid search for an SVR model using a time-series split. The problem is that the grid search takes roughly 30+ minutes, which is too long for my purposes. My data set has about 17,800 samples. Is there any way I could reduce this duration? My code is:
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn import preprocessing as pre
# Reshape features and target into column vectors
X_feature = X_feature.reshape(-1, 1)
y_label = y_label.reshape(-1, 1)

param = [{'kernel': ['rbf'], 'gamma': [1e-2, 1e-3, 1e-4, 1e-5],
          'C': [1, 10, 100, 1000]},
         {'kernel': ['poly'], 'C': [1, 10, 100, 1000], 'degree': [1, 2, 3, 4]}]

reg = SVR(C=1)
timeseries_split = TimeSeriesSplit(n_splits=3)
clf = GridSearchCV(reg, param, cv=timeseries_split, scoring='neg_mean_squared_error')

# Scale features and target to the [0, 1] range
X = pre.MinMaxScaler(feature_range=(0, 1)).fit(X_feature)
scaled_X = X.transform(X_feature)
y = pre.MinMaxScaler(feature_range=(0, 1)).fit(y_label)
scaled_y = y.transform(y_label)

clf.fit(scaled_X, scaled_y.ravel())  # SVR expects a 1-D target
My data for scaled y is:
[[0.11321139]
[0.07218848]
...
[0.64844211]
[0.4926122 ]
[0.4030334 ]]
And my data for scaled X is:
[[0.2681013 ]
[0.03454225]
[0.02062136]
...
[0.92857565]
[0.64930691]
[0.20325924]]
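In case it helps, the only speed-up I have sketched so far (not yet timed) is the same search run in parallel across all cores via GridSearchCV's n_jobs=-1, plus a larger SVR kernel cache; scaled_X and scaled_y here are the scaled arrays prepared above:

# Sketch of a possible speed-up (untested for my data): parallelise the grid
# search across all cores and give SVR a larger kernel cache than the 200 MB default.
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

param = [{'kernel': ['rbf'], 'gamma': [1e-2, 1e-3, 1e-4, 1e-5],
          'C': [1, 10, 100, 1000]},
         {'kernel': ['poly'], 'C': [1, 10, 100, 1000], 'degree': [1, 2, 3, 4]}]
reg = SVR(cache_size=1000)  # kernel cache size in MB
clf = GridSearchCV(reg, param, cv=TimeSeriesSplit(n_splits=3),
                   scoring='neg_mean_squared_error',
                   n_jobs=-1)  # fit candidates/folds in parallel
clf.fit(scaled_X, scaled_y.ravel())  # same scaled arrays as above

I do not know whether parallelising alone is enough here, or whether I would also need to shrink the parameter grid, so any advice is appreciated.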