
I'm trying to implement a naive Bayes classifier for sentiment analysis, and I plan to use TF-IDF weighting. I'm just a little stuck: naive Bayes generally uses word (feature) frequencies to estimate the maximum likelihood. So how do I introduce the TF-IDF weighting measure into naive Bayes?

Charles
karthik A
  • Were you able to find out how this can be done? I'm stuck with the same problem, and my searches haven't turned up anything definite. – POOJA GUPTA May 19 '16 at 12:07

1 Answer


You use the TF-IDF weights as features/predictors in your statistical model. I suggest using either gensim [1] or scikit-learn [2] to compute the weights, which you then pass to your naive Bayes fitting procedure.

The scikit-learn 'working with text' tutorial [3] might also be of interest.
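
As a minimal sketch of the idea (the toy documents, labels, and the particular pipeline below are made up for illustration, not taken from the question), you could compute the TF-IDF weights with scikit-learn's `TfidfVectorizer` and feed them to `MultinomialNB`, which accepts fractional feature values:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy sentiment data
docs = ["I loved this movie", "Terrible plot and awful acting",
        "Great performances all around", "Not worth watching"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TfidfVectorizer produces the TF-IDF weights; MultinomialNB then treats
# them as (fractional) feature counts when fitting the class likelihoods.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["what a great film"]))  # e.g. array([1])
```
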

[1] http://radimrehurek.com/gensim/models/tfidfmodel.html

[2] http://scikit-learn.org/dev/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html

[3] http://scikit-learn.github.io/scikit-learn-tutorial/working_with_text_data.html

kyrre
  • Updated link to point 3 - https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html – Bee Apr 26 '22 at 18:34