
I am working on a classification problem with a highly imbalanced dataset. I am trying to use SMOTEENN in the grid-search pipeline; however, I keep getting this ValueError:

ValueError: Invalid parameter randomforestclassifier for estimator Pipeline(memory=None,
         steps=[('preprocessor_X',
                 ColumnTransformer(n_jobs=None, remainder='drop',
                                   sparse_threshold=0.3,
                                   transformer_weights=None,
                                   transformers=[('num',
                                                  Pipeline(memory=None,
                                                           steps=[('scaler',
                                                                   StandardScaler(copy=True,
                                                                                  with_mean=True,
                                                                                  with_std=True))],
                                                           verbose=False),
                                                  ['number_of_participants',
                                                   'count_timely_submission',
                                                   'count_by_self',
                                                   'count_at_ra...
                                                         class_weight='balanced',
                                                         criterion='gini',
                                                         max_depth=None,
                                                         max_features='auto',
                                                         max_leaf_nodes=None,
                                                         max_samples=None,
                                                         min_impurity_decrease=0.0,
                                                         min_impurity_split=None,
                                                         min_samples_leaf=1,
                                                         min_samples_split=2,
                                                         min_weight_fraction_leaf=0.0,
                                                         n_estimators=100,
                                                         n_jobs=None,
                                                         oob_score=False,
                                                         random_state=0,
                                                         verbose=0,
                                                         warm_start=False))],
                          verbose=False))],
         verbose=False). Check the list of available parameters with `estimator.get_params().keys()`.

I found online that SMOTEENN can be used with GridSearchCV if the Pipeline from imblearn is used instead of sklearn's. I am using the Pipeline from imblearn, but I still get this error.
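To be explicit, this is the import I mean (imblearn's Pipeline, which accepts resamplers such as SMOTEENN as steps, unlike sklearn.pipeline.Pipeline):

# imblearn's Pipeline allows samplers as intermediate steps;
# sklearn's own Pipeline does not.
from imblearn.pipeline import Pipeline
from imblearn.combine import SMOTEENN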

The issue first appeared when I tried to use SMOTEENN while building the X and y variables. I have a prepare_data() function that splits the data into X and y, and I wanted to apply SMOTEENN inside that function and return the balanced data. However, one of my features is a string and needs to go through OneHotEncoder, and SMOTEENN does not seem to handle string columns. So I need to apply it inside the pipeline, after the preprocessing step.
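A minimal toy sketch of what I mean (made-up data, not my actual dataset): fit_resample works once everything is numeric, which is why the encoding has to happen before the resampling step:

import numpy as np
from imblearn.combine import SMOTEENN

# toy imbalanced, all-numeric data; a string column here would fail,
# which is why the OneHotEncoder must run before SMOTEENN in my real pipeline
X_num = np.random.rand(100, 3)
y = np.array([0] * 90 + [1] * 10)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_num, y)
print(X_res.shape, np.bincount(y_res))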

I am pasting my pipeline code below. Any help or explanation would be much appreciated! Thank you!

def ML_RandomF(X, y, random_state, n_folds, oneHot_ftrs, 
               num_ftrs, ordinal_ftrs, ordinal_cats, beta, test_size, score_type):

    scoring = {'roc_auc_score': make_scorer(roc_auc_score), 
               'f_beta': make_scorer(fbeta_score, beta=beta, average='weighted'), 
               'accuracy': make_scorer(accuracy_score)}

    X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=test_size, random_state = random_state)
    kf = StratifiedKFold(n_splits=n_folds,shuffle=True,random_state=random_state)  

    reg = RandomForestClassifier(random_state=random_state, n_estimators=100, class_weight="balanced")
    sme = SMOTEENN(random_state=random_state)

    model = Pipeline([
        ('sampling', sme),
        ('classification', reg)])

    # ordinal encoder
    ordinal_transformer = Pipeline(steps=[
        ('ordinal', OrdinalEncoder(categories = ordinal_cats))])

    # oneHot encoder
    onehot_transformer = Pipeline(steps=[
        ('ordinal', OneHotEncoder(sparse=False, handle_unknown='ignore'))])

    # standard scaler
    numeric_transformer = Pipeline(steps=[
        ('scaler', StandardScaler())])

    preprocessor_X = ColumnTransformer(
        transformers=[
            ('num', numeric_transformer, num_ftrs),
            ('oneH', onehot_transformer, oneHot_ftrs),
            ('ordinal', ordinal_transformer, ordinal_ftrs)])

    pipe = Pipeline(steps=[('preprocessor_X', preprocessor_X), ('model', model)])

    param_grid = {'randomforestclassifier__max_depth': [3,5,7,10], 
                  'randomforestclassifier__min_samples_split': [10,25,40]}
    grid = GridSearchCV(pipe,param_grid=param_grid,
                        scoring=scoring,cv=kf, refit=score_type,
                        return_train_score=True,iid=True, verbose=2, n_jobs=-1)

    grid.fit(X_other, y_other)
    return grid, grid.score(X_test, y_test)

1 Answer


You named the RandomForestClassifier step `classification`, and that inner pipeline is itself the `model` step of your outer pipeline. Hence you have to change your `param_grid` as follows:


param_grid = {'model__classification__max_depth': [3,5,7,10], 
              'model__classification__min_samples_split': [10,25,40]}
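
If you are unsure of the exact prefixes, you can list the valid parameter names directly, as the error message itself suggests (a quick sketch, assuming `pipe` is the outer pipeline from the question):

# Print the tunable parameters exposed by the nested pipeline;
# the RandomForestClassifier parameters show up under the
# 'model__classification__' prefix.
for name in pipe.get_params().keys():
    if 'classification' in name:
        print(name)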
  • Actually `classification` is in `model` which itself is in `pipe` so it is a bit more convoluted. `model__classification__max_depth`. However, you don't need a nested pipeline in this case. – glemaitre Jan 08 '20 at 13:08
  • yes, you are right. I will update my answer. Thanks – Venkatachalam Jan 08 '20 at 13:39
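
Following up on glemaitre's comment, here is a sketch of how the nested pipeline could be flattened so the sampler and classifier become steps of the outer pipeline directly (reusing the preprocessor_X, sme and reg objects from the question; the parameter names then lose the model__ prefix):

from imblearn.pipeline import Pipeline

# one flat pipeline: preprocessing -> resampling -> classifier
pipe = Pipeline(steps=[
    ('preprocessor_X', preprocessor_X),
    ('sampling', sme),
    ('classification', reg)])

# the grid keys are now one level shallower
param_grid = {'classification__max_depth': [3, 5, 7, 10],
              'classification__min_samples_split': [10, 25, 40]}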