
I am trying to find the optimal number of topics using scikit-learn's LDA model. To do this I calculate perplexity, following the code at https://gist.github.com/tmylk/b71bf7d3ec2f203bfce2.

But as I increase the number of topics, the perplexity grows by orders of magnitude instead of decreasing. Is there a mistake in my implementation, or are these values correct?

from __future__ import print_function
from time import time

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

n_samples = 0.7    # fraction of documents used for training; printed with %d below, hence "n_samples=0" in the output
n_features = 1000
n_top_words = 20

# kickstarter is a pandas DataFrame with a 'short_desc' text column
dataset = kickstarter['short_desc'].tolist()
data_samples = dataset[:int(len(dataset) * n_samples)]
test_samples = dataset[int(len(dataset) * n_samples):]

Use tf (raw term count) features for LDA.

print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
                                max_features=n_features,
                                stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
# Transform the held-out test documents with the same fitted vectorizer
print("Extracting tf features for the test set...")
t0 = time()
tf_test = tf_vectorizer.transform(test_samples)
print("done in %0.3fs." % (time() - t0))

Calculate perplexity for 5, 10, 15, ..., 100 topics:

for n_topics in range(5, 101, 5):

    print("Fitting LDA models with tf features, "
          "n_samples=%d, n_features=%d n_topics=%d "
          % (n_samples, n_features, n_topics))

    # note: n_topics was renamed to n_components in scikit-learn 0.19
    lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
                                    learning_method='online',
                                    learning_offset=50.,
                                    random_state=0)
    t0 = time()
    lda.fit(tf)

    # infer per-document topic distributions and feed them to perplexity()
    train_gamma = lda.transform(tf)
    train_perplexity = lda.perplexity(tf, train_gamma)

    test_gamma = lda.transform(tf_test)
    test_perplexity = lda.perplexity(tf_test, test_gamma)

    print('sklearn perplexity: train=%.3f, test=%.3f' %
          (train_perplexity, test_perplexity))

    print("done in %0.3fs." % (time() - t0))

Results of Perplexity Calculation

Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=5
sklearn perplexity: train=9500.437, test=12350.525
done in 4.966s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=10
sklearn perplexity: train=341234.228, test=492591.925
done in 4.628s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=15
sklearn perplexity: train=11652001.711, test=17886791.159
done in 4.337s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=20
sklearn perplexity: train=402465954.270, test=609914097.869
done in 4.351s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=25
sklearn perplexity: train=14132355039.630, test=21945586497.205
done in 4.438s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=30
sklearn perplexity: train=499209051036.715, test=770208066318.557
done in 4.076s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=35
sklearn perplexity: train=16539345584599.268, test=24731601176317.836
done in 4.230s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=40
sklearn perplexity: train=586526357904887.250, test=880809950700756.625
done in 4.596s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=45
sklearn perplexity: train=20928740385934636.000, test=31065168894315760.000
done in 4.563s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=50
sklearn perplexity: train=734804198843926784.000, test=1102284263786783616.000
done in 4.790s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=55
sklearn perplexity: train=24747026375445286912.000, test=36634830286916853760.000
done in 4.839s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=60
sklearn perplexity: train=879215493067590729728.000, test=1268331920975308783616.000
done in 4.827s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=65
sklearn perplexity: train=30267393208097070645248.000, test=43678395923698735382528.000
done in 4.705s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=70
sklearn perplexity: train=1091388615092136975532032.000, test=1564111432914603675222016.000
done in 4.626s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=75
sklearn perplexity: train=37463573890268863118966784.000, test=51513357456275195169865728.000
done in 5.034s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=80
sklearn perplexity: train=1281758440147129243608809472.000, test=1736796133443165299937378304.000
done in 5.348s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=85
sklearn perplexity: train=45100838968058242714191265792.000, test=62725627465378386290422054912.000
done in 4.987s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=90
sklearn perplexity: train=1555576278144903954081448460288.000, test=2117105172204280105824751190016.000
done in 5.032s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=95
sklearn perplexity: train=52806759455785055803020813533184.000, test=70510180325555822379548402515968.000
done in 5.284s.
Fitting LDA models with tf features, n_samples=0, n_features=1000 n_topics=100
sklearn perplexity: train=1885916623308147578324101753733120.000, test=2505878598724106449894719231098880.000
done in 5.374s.
  • Can I ask why you reverted the peer-approved edits? I think this question is interesting, but it is extremely difficult to interpret in its current state. The poor grammar makes it essentially unreadable. – elz Aug 18 '17 at 14:41
  • Apart from the grammatical problem, what the corrected sentence means is different from what I intended. For example, if you increase the number of topics, the perplexity should decrease in general, I think. Even though the present results do not fit that, it is not simply a matter of the values increasing or decreasing. – JonghoKim Aug 18 '17 at 14:46
  • OK, I still think this is essentially what the edits reflected, although with the emphasis on monotonic (either always increasing or always decreasing) instead of simply decreasing. Your current question statement is confusing, as your results do not "always increase" with the number of topics, but instead sometimes increase and sometimes decrease (which I believe you are referring to as "irrational" here; this was probably lost in translation. "Irrational" is a different word mathematically and doesn't make sense in this context, so I would suggest changing it). – elz Aug 18 '17 at 15:01
  • Thanks a lot :) I will reflect your suggestion soon. – JonghoKim Aug 18 '17 at 15:03
  • Hi! Did you find a solution? I am experiencing the same problem: perplexity is increasing as the number of topics increases. – mammask Sep 25 '17 at 11:03

1 Answer


There is a bug in scikit-learn that causes the perplexity to increase with the number of topics:

https://github.com/scikit-learn/scikit-learn/issues/6777
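Roughly, the perplexity computation expects the unnormalized variational parameters, while transform() returns a normalized document-topic distribution, so feeding the transform output back into perplexity() produces the exploding values above. Below is a minimal sketch of a workaround, assuming scikit-learn 0.19 or later, where the doc_topic_distr argument of perplexity() is deprecated and the distribution is computed internally:

import numpy as np

# Let perplexity() infer the document-topic distribution itself,
# instead of passing in the normalized output of transform():
train_perplexity = lda.perplexity(tf)
test_perplexity = lda.perplexity(tf_test)

# Rough cross-check: score() returns the approximate variational bound on
# the log-likelihood, so exp(-bound / total word count) is the perplexity.
manual_train_perplexity = np.exp(-lda.score(tf) / tf.sum())

With the distribution handled internally, the held-out perplexity becomes comparable across topic counts, and picking the n_topics with the lowest test perplexity is the usual heuristic.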
