I am trying to fix the randomization in my code, but every time I run it I get a different best score and different best parameters. The results are not too far apart, but how can I fix things so that I get the same best score and parameters on every run?
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, classification_report

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# with random_state=None, the tree's random choices (splitter='random', max_features subsampling) differ on every run
clf = DecisionTreeClassifier(random_state=None)
parameter_grid = {'criterion': ['gini', 'entropy'],
                  'splitter': ['best', 'random'],
                  'max_depth': [1, 2, 3, 4, 5, 6, 8, 10, 20, 30, 50],
                  'max_features': [10, 20, 30, 40, 50]}
# shuffle defaults to False, so these folds are already deterministic; random_state only matters with shuffle=True
skf = StratifiedKFold(n_splits=10, random_state=None)
skf.get_n_splits(X_train, y_train)
grid_search = GridSearchCV(clf, param_grid=parameter_grid, cv=skf, scoring='precision')
grid_search.fit(X_train, y_train)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
clf = grid_search.best_estimator_
y_pred = clf.predict(X_test)
print(confusion_matrix(y_test,y_pred),"\n")
print(classification_report(y_test,y_pred),"\n")
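For comparison, here is a minimal sketch of the same pipeline with every source of randomness pinned to a fixed seed. It uses scikit-learn's built-in iris data and a smaller grid (not your X/y or full parameter grid, which aren't shown), and accuracy instead of precision since iris is multiclass; the point is only that once the tree's own `random_state` is fixed, repeated runs return identical `best_score_` and `best_params_`.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def run_search(seed=27):
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    # Pin the tree's RNG: this is what varies between your runs when
    # splitter='random' or max_features < n_features is tried.
    clf = DecisionTreeClassifier(random_state=seed)
    grid = {'criterion': ['gini', 'entropy'],
            'splitter': ['best', 'random'],
            'max_depth': [2, 4, 8]}
    # Default shuffle=False, so the folds are deterministic as-is;
    # with shuffle=True you would pass random_state here as well.
    skf = StratifiedKFold(n_splits=5)
    search = GridSearchCV(clf, param_grid=grid, cv=skf, scoring='accuracy')
    search.fit(X_train, y_train)
    return search.best_score_, search.best_params_

print(run_search() == run_search())  # identical once every random_state is fixed
```

The key change relative to the code above is `DecisionTreeClassifier(random_state=seed)` instead of `random_state=None`.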