I like to run the following workflow:

  1. Select a model for text vectorization
  2. Define a list of parameters
  3. Apply a pipeline with GridSearchCV on the parameters, using LogisticRegression() as a baseline, to find the best model parameters
  4. Save the best model (parameters)
  5. Load the best model parameters so that we can apply a range of other classifiers on top of this defined model

Here is code you can use to reproduce it:

GridSearch:

%%time
import numpy as np
import pandas as pd
import joblib  # `from sklearn.externals import joblib` was removed in scikit-learn 0.23
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from gensim.utils import simple_preprocess
np.random.seed(0)

data = pd.read_csv('https://pastebin.com/raw/dqKFZ12m')
X_train, X_test, y_train, y_test = train_test_split([simple_preprocess(doc) for doc in data.text],
                                                    data.label, random_state=0)

# Find best Tfidf model using LR
pipeline = Pipeline([
  ('tfidf', TfidfVectorizer(preprocessor=' '.join, tokenizer=None)),
  ('clf', LogisticRegression())
  ])

parameters = {
              'tfidf__max_df': [0.25, 0.5, 0.75, 1.0],
              'tfidf__smooth_idf': (True, False),
              'tfidf__norm': ('l1', 'l2', None),
              }

grid = GridSearchCV(pipeline, parameters, cv=2, verbose=1)
grid.fit(X_train, y_train)

print(grid.best_params_)

# Save model
#joblib.dump(grid.best_estimator_, 'best_tfidf.pkl', compress = 1) # this unfortunately includes the LogReg
joblib.dump(grid.best_params_, 'best_tfidf.pkl', compress = 1) # Only best parameters

Fitting 2 folds for each of 24 candidates, totalling 48 fits
{'tfidf__smooth_idf': True, 'tfidf__norm': 'l2', 'tfidf__max_df': 0.25}
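As an aside (a sketch, not part of the original post): if what you actually want to reuse is the fitted vectorizer rather than its parameter dict, you can dump just that pipeline step via `named_steps`, which avoids pickling the LogisticRegression along with it. The tiny inline corpus and the filename `best_tfidf_vectorizer.pkl` below are illustrative stand-ins.

```python
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Tiny stand-in corpus (the real data comes from the pastebin CSV above)
docs = ["good movie", "bad movie", "great film", "terrible film"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('clf', LogisticRegression()),
])
grid = GridSearchCV(pipeline, {'tfidf__norm': ['l1', 'l2']}, cv=2)
grid.fit(docs, labels)

# Dump only the fitted vectorizer step, without the LogisticRegression
fitted_tfidf = grid.best_estimator_.named_steps['tfidf']
joblib.dump(fitted_tfidf, 'best_tfidf_vectorizer.pkl', compress=1)
```

The loaded object can then be used with `transform()` directly, since it was already fitted during the grid search.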

Load Model with best parameters:

from sklearn.model_selection import GridSearchCV

# Load best parameters
tfidf_params = joblib.load('best_tfidf.pkl')

pipeline = Pipeline([
  ('vec', TfidfVectorizer(preprocessor=' '.join, tokenizer=None).set_params(**tfidf_params)), # here is the issue?
  ('clf', LogisticRegression())
  ])

cval = cross_val_score(pipeline, X_train, y_train, scoring='accuracy', cv=5)
print("Cross-Validation Score: %s" % (np.mean(cval)))

ValueError: Invalid parameter tfidf for estimator TfidfVectorizer(analyzer='word', binary=False, decode_error='strict', dtype=<class 'numpy.float64'>, encoding='utf-8', input='content', lowercase=True, max_df=1.0, max_features=None, min_df=1, ngram_range=(1, 1), norm='l2', preprocessor=<built-in method join of str object>, smooth_idf=True, stop_words=None, strip_accents=None, sublinear_tf=False, token_pattern='(?u)\\b\\w\\w+\\b', tokenizer=None, use_idf=True, vocabulary=None). Check the list of available parameters with estimator.get_params().keys().

Question:

How can I load the best parameters of the Tfidf model?

Christopher

1 Answer


This line:

joblib.dump(grid.best_params_, 'best_tfidf.pkl', compress = 1) # Only best parameters

saves the parameters of the pipeline, not of the TfidfVectorizer alone: the keys keep the step prefix (e.g. `tfidf__max_df`). So apply them to a pipeline whose vectorizer step has that same name:

pipeline = Pipeline([
  # Use the same step name ('tfidf') as before
  ('tfidf', TfidfVectorizer(preprocessor=' '.join, tokenizer=None)),
  ('clf', LogisticRegression())
  ])

pipeline.set_params(**tfidf_params)
Vivek Kumar
  • Returns the same error: `ValueError: Invalid parameter tfidf for estimator Pipeline(memory=None,...` – Christopher Jan 15 '19 at 12:19
  • 1
    @Christopher Ah yes. I have updated the answer. You need to use the same names for the pipeline components as you used before. Change `vec` to `tfidf` – Vivek Kumar Jan 15 '19 at 12:26
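An alternative sketch (not from the accepted answer): if you want to keep a differently named step, such as `vec` in the question's second snippet, you can strip the `tfidf__` pipeline prefix from the saved keys and apply the values to the vectorizer directly. The parameter dict below mirrors the `best_params_` printed earlier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Parameters as saved by the grid search above (pipeline-prefixed keys)
tfidf_params = {'tfidf__smooth_idf': True, 'tfidf__norm': 'l2', 'tfidf__max_df': 0.25}

# Strip the 'tfidf__' step prefix so the values apply to the vectorizer itself
vec_params = {key.split('__', 1)[1]: value for key, value in tfidf_params.items()}

vectorizer = TfidfVectorizer(preprocessor=' '.join, tokenizer=None)
vectorizer.set_params(**vec_params)  # max_df is now 0.25
```

This way the step name in the new pipeline no longer has to match the one used during the grid search.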