53

I'm finding it difficult to understand how to fix a Pipeline I created (read: largely pasted from a tutorial). It's Python 3.4.2:

import numpy
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

df = pd.DataFrame.from_records(train)

test = ['blah1', 'blah2', 'blah3']

pipeline = Pipeline([('vectorizer', CountVectorizer()), ('classifier', RandomForestClassifier())])

pipeline.fit(numpy.asarray(df[0]), numpy.asarray(df[1]))
predicted = pipeline.predict(test)

When I run it, I get:

TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.

The error is raised by the line pipeline.fit(numpy.asarray(df[0]), numpy.asarray(df[1])).

I've experimented a lot with solutions through numpy, scipy, and so forth, but I still don't know how to fix it. And yes, similar questions have come up before, but not inside a pipeline. Where do I have to apply `toarray` or `todense`?

Ada Stra

6 Answers

86

Unfortunately, those two are incompatible: CountVectorizer produces a sparse matrix and RandomForestClassifier requires a dense matrix. It is possible to convert with X.todense(), but doing so will substantially increase your memory footprint.

Below is sample code, based on http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html, that lets you call .todense() in a pipeline stage.

from sklearn.base import TransformerMixin

class DenseTransformer(TransformerMixin):

    def fit(self, X, y=None, **fit_params):
        return self

    def transform(self, X, y=None, **fit_params):
        # convert the sparse matrix from the previous step to a dense one
        return X.todense()

Once you have your DenseTransformer, you can add it as a pipeline step:

pipeline = Pipeline([
     ('vectorizer', CountVectorizer()), 
     ('to_dense', DenseTransformer()), 
     ('classifier', RandomForestClassifier())
])

Another option would be to use a classifier meant for sparse data like LinearSVC.

from sklearn.svm import LinearSVC
pipeline = Pipeline([('vectorizer', CountVectorizer()), ('classifier', LinearSVC())])
David Maust
  • Thanks a lot! I am experimenting with different classifiers, in part to learn, and in part to find what works best. Truth be told, for my case I get by far best results with multinomial NB. I'll experiment with your code, thanks so much for the exhaustive answer. – Ada Stra Feb 07 '15 at 17:23
  • Sounds fun. RandomForest is good for dense numeric data. I've found it doesn't scale that well for sparse text features. If you do want to try it on text, you might try adding a feature selection stage first. That can sometimes work well. My favorites for text have been LinearSVC and SGDClassifier using either loss='modified_huber' or loss='log' (see the sketch after these comments). – David Maust Feb 07 '15 at 17:28
  • What parameters should be used for a classifier-based POS tagger application using SGD? – stackit Sep 12 '15 at 07:48
  • This worked for me! I was using Naive Bayes in the pipeline which also requires a dense matrix. – Joselo May 31 '22 at 16:37
  • If you get a downstream error saying `TypeError: np.matrix is not supported.` Use `X.toarray()` in the transform method instead of `X.todense()`. See: https://stackoverflow.com/questions/30416695/numpy-and-scipy-difference-between-todense-and-toarray – Finncent Price Mar 22 '23 at 16:45
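To illustrate the feature-selection idea from David's comment above, here is a minimal sketch; SelectKBest with chi2 and k=1000 are illustrative assumptions on my part, not something prescribed in the thread:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('vectorizer', CountVectorizer()),
    # keep only the k terms most associated with the labels; k needs tuning
    ('select', SelectKBest(chi2, k=1000)),
    # SGDClassifier handles sparse input, so no densifying step is needed
    ('classifier', SGDClassifier(loss='modified_huber'))
])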
37

The most terse solution would be to use a FunctionTransformer to convert to dense: this will automatically implement the fit, transform and fit_transform methods as in David's answer. Additionally, if I don't need special names for my pipeline steps, I like to use the sklearn.pipeline.make_pipeline convenience function to enable a more minimalist language for describing the model:

from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
     CountVectorizer(), 
     FunctionTransformer(lambda x: x.todense(), accept_sparse=True), 
     RandomForestClassifier()
)
maxymoo
  • I just tried this and saw the `accept_sparse` parameter of `FunctionTransformer`. You need to set it to `True`. – Cory Oct 04 '16 at 17:34
  • For those of you that use @maxymoo's solution as much as I do, FunctionTransformer can be imported with `from sklearn.preprocessing import FunctionTransformer`. – Jarad Sep 13 '17 at 04:28
  • I get an error when adding the FunctionTransformer: `AttributeError: Can't pickle local object 'main large..' pipeline`. Any hints on how to fix it? – Guido Sep 06 '18 at 07:01
  • @guido use `dill` instead of `pickle` – maxymoo Sep 06 '18 at 23:56
  • @Guido I am guessing you're trying to use the pipeline inside some cross-validation / grid search. Under the hood, the pipeline is pickled, and the problem is that `lambda` functions cannot be pickled. Therefore, you have to extract the `lambda` functionality into a regular function, `def to_dense(x):`, and use it instead of the `lambda`. – Dror Dec 12 '18 at 09:00
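Following Dror's comment, a pickle-friendly variant of this answer could look like the sketch below; the function name to_dense is my own, hypothetical choice:

from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

def to_dense(x):
    # a module-level function can be pickled, unlike a lambda
    return x.todense()

pipeline = make_pipeline(
    CountVectorizer(),
    FunctionTransformer(to_dense, accept_sparse=True),
    RandomForestClassifier()
)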
17

Random forests in scikit-learn 0.16-dev now accept sparse data.
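If so, the original pipeline should work on the sparse matrix unchanged; a minimal sketch, assuming scikit-learn >= 0.16:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

# the forest consumes CountVectorizer's sparse output directly,
# so no DenseTransformer step is required
pipeline = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('classifier', RandomForestClassifier())
])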

Gilles Louppe
5

You can convert a pandas Series to an array using its .values attribute:

pipeline.fit(df[0].values, df[1].values)

However, I think the issue here is that CountVectorizer() returns a sparse matrix by default, which cannot be piped into the RF classifier. CountVectorizer() does have a dtype parameter to specify the type of array returned. That said, you usually need to do some sort of dimensionality reduction to use random forests for text classification, because bag-of-words feature vectors are very long.
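As a sketch of that kind of reduction (TruncatedSVD and n_components=100 are illustrative assumptions, not part of the answer):

from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('vectorizer', CountVectorizer()),
    # TruncatedSVD accepts sparse input and emits a dense, low-dimensional array
    ('svd', TruncatedSVD(n_components=100)),
    ('classifier', RandomForestClassifier())
])
pipeline.fit(df[0].values, df[1].values)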

JAB
0

I found that using FunctionTransformer with x.toarray() rather than x.todense() worked for me:

pipeline = Pipeline([
    ('vect', TfidfVectorizer()),
    ('dense', FunctionTransformer(lambda x: x.toarray(), accept_sparse=True)),
    ('clf', GaussianProcessClassifier())
])
-1

With this pipeline, add a TfidfTransformer in addition to the DenseTransformer:

pipelinEx = Pipeline([('bow', vectorizer),
                      ('tfidf', TfidfTransformer()),
                      ('to_dense', DenseTransformer()),
                      ('classifier', classifier)])

The 'bow' step above gets the word counts for the documents in sparse matrix form. In practice, however, you may be computing tf-idf scores with TfidfTransformer on a set of new, unseen documents. By then calling the fitted transformer's transform() on the word counts of those documents, you finally compute their tf-idf scores. Internally, this performs the tf * idf multiplication, where each term frequency is weighted by its idf value.
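As a sketch of that fit-then-transform flow (train_docs and new_docs are hypothetical placeholders):

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

vectorizer = CountVectorizer()
# learn the vocabulary and get sparse word counts for the training documents
train_counts = vectorizer.fit_transform(train_docs)
# learn the idf weights from the training counts
tfidf = TfidfTransformer().fit(train_counts)
# compute tf-idf scores for new, unseen documents
new_tfidf = tfidf.transform(vectorizer.transform(new_docs))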

Max Kleiner