
Following is my code:

sklearn_tfidf = TfidfVectorizer(ngram_range=(3, 3), stop_words=stopwordslist, norm='l2', min_df=0, use_idf=True, smooth_idf=False, sublinear_tf=True)
sklearn_representation = sklearn_tfidf.fit_transform(documents)

It generates trigrams after removing all the stopwords.

What I want is to allow trigrams that have a stopword in their middle (but not at the start or end).

Does a custom preprocessor need to be written for this? Need suggestions.

Ahmed Ashour
Shan Khan

1 Answer


Yes, you need to supply your own analyzer function, which will convert the documents to features as per your requirements.

According to the documentation:

analyzer : string, {‘word’, ‘char’, ‘char_wb’} or callable

...
If a callable is passed it is used to extract the sequence of 
features out of the raw, unprocessed input.

In that custom callable, you need to take care of first splitting the sentence into tokens, removing special characters (commas, braces, symbols, etc.), converting them to lower case, and then building them into n-grams.

The default implementation processes a single sentence in the following order:

  1. Decoding: decode the sentence according to the given encoding (default 'utf-8')
  2. Preprocessing: convert the sentence to lower case
  3. Tokenizing: get single-word tokens from the sentence (the default regexp selects tokens of 2 or more alphanumeric characters)
  4. Stop word removal: remove the single-word tokens from the step above which are present in the stop words
  5. N-gram creation: after stop word removal, the remaining tokens are arranged into the required n-grams
  6. Removal of too rare or too common features: remove terms whose document frequency is greater than max_df or lower than min_df

You need to handle all this if you want to pass a custom callable to the analyzer param in the TfidfVectorizer.
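For illustration, steps 2–5 above can be sketched as a standalone analyzer callable that keeps trigrams with a stopword in the middle (a minimal sketch; the function name, the example stop word set, and the defaults are assumptions, and step 6 is handled by min_df/max_df in the vectorizer itself, not by the analyzer):

```python
import re

def middle_stopword_analyzer(doc, stop_words=frozenset(), n=3):
    # Preprocessing: lower-case the raw document
    doc = doc.lower()
    # Tokenizing: same default pattern sklearn uses (2+ word characters)
    tokens = re.findall(r"(?u)\b\w\w+\b", doc)
    # N-gram creation, with the stop word rule applied per n-gram:
    # keep a trigram only if its first and last words are not stop words
    ngrams = []
    for i in range(len(tokens) - n + 1):
        gram = tokens[i:i + n]
        if gram[0] not in stop_words and gram[-1] not in stop_words:
            ngrams.append(' '.join(gram))
    return ngrams

grams = middle_stopword_analyzer(
    "The speed of light is constant",
    stop_words=frozenset({'the', 'of', 'is'}))
# 'speed of light' and 'light is constant' survive; trigrams that
# start or end with a stop word ('the speed of', 'of light is') do not
```

You could then pass a wrapper around this function (with your stop word list baked in) as `analyzer=` to TfidfVectorizer.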

OR

You can extend the TfidfVectorizer class and override only steps 4 and 5 (stop word removal and n-gram creation), which both happen inside _word_ngrams(). Something like this:

from sklearn.feature_extraction.text import TfidfVectorizer

class NewTfidfVectorizer(TfidfVectorizer):
    def _word_ngrams(self, tokens, stop_words=None):
        # First build the n-grams without removing any stop words
        tokens = super()._word_ngrams(tokens, None)

        if stop_words is not None:
            new_tokens = []
            for token in tokens:
                split_words = token.split(' ')

                # Only check the first and last word for stop words
                if split_words[0] not in stop_words and split_words[-1] not in stop_words:
                    new_tokens.append(token)
            return new_tokens

        return tokens

Then, use it like:

vectorizer = NewTfidfVectorizer(stop_words='english', ngram_range=(3,3))
vectorizer.fit(data)
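As a quick check, fitting the subclass on a toy sentence shows trigrams whose middle word is a stop word surviving in the vocabulary (a hypothetical demo repeating the subclass so the snippet is self-contained; note that _word_ngrams is a private scikit-learn method, so this may break in future versions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

class NewTfidfVectorizer(TfidfVectorizer):
    def _word_ngrams(self, tokens, stop_words=None):
        # Build the n-grams first, without removing any stop words
        tokens = super()._word_ngrams(tokens, None)
        if stop_words is not None:
            # Drop only n-grams whose first or last word is a stop word
            tokens = [t for t in tokens
                      if t.split(' ')[0] not in stop_words
                      and t.split(' ')[-1] not in stop_words]
        return tokens

docs = ["the speed of light is constant"]
vec = NewTfidfVectorizer(stop_words='english', ngram_range=(3, 3))
vec.fit(docs)
# 'of' and 'is' are English stop words, yet they survive inside
# the trigrams 'speed of light' and 'light is constant'
print(sorted(vec.vocabulary_))
```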
Vivek Kumar
  • I tried to get tokens without stop words on that line, but it returns them with the stop words already removed. I think some other function is called before it. I'm calling fit_transform. – Shan Khan Apr 11 '18 at 22:24
  • @ShanKhan Please provide an example of sentences where you think this is happening. I have checked it with sample data and it's working as expected. Please mind that the above code will remove trigrams whose starting or ending word is a stop word. – Vivek Kumar Apr 12 '18 at 06:53
  • Thanks, it worked. Actually, I was removing stop words in my initial cleaning; when I removed that chunk, it worked perfectly! – Shan Khan Apr 14 '18 at 10:03
  • I couldn't find an example nor a reference about defining one's own analyzer. Can you please add one? – miguelmorin Jun 07 '23 at 17:18
  • @miguelmorin You can find the [source for the default analyzer here](https://github.com/scikit-learn/scikit-learn/blob/364c77e047ca08a95862becf40a04fe9d4cd2c98/sklearn/feature_extraction/text.py#L423). Please have a go at copying and changing that as per your requirements, and let me know if there is any issue. You need to look at [def _analyze()](https://github.com/scikit-learn/scikit-learn/blob/364c77e047ca08a95862becf40a04fe9d4cd2c98/sklearn/feature_extraction/text.py#L75) also. – Vivek Kumar Jun 08 '23 at 06:49