9

When I run gensim's LdaMulticore model on a machine with 12 cores, using:

lda = LdaMulticore(corpus, num_topics=64, workers=10)

I get a logging message that says

using serial LDA version on this node  

A few lines later, I see another logging message that says

training LDA model using 10 processes

When I run top, I see 11 python processes have been spawned, but 9 are sleeping, i.e. only one worker is active. The machine has 12 cores, and is not overwhelmed by any means. Why isn't LdaMulticore running in parallel mode?

Edward Newell
  • 17,203
  • 7
  • 34
  • 36
  • One reason might be due to the [slow loading of the `corpus`](https://github.com/piskvorky/gensim/issues/288). Test your code to see how much time it takes. – Jon Dec 14 '15 at 06:03

1 Answer

15

First, make sure you have installed a fast BLAS library, because most of the time-consuming work happens inside low-level linear algebra routines.
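A quick way to sanity-check which BLAS numpy is linked against (the output format varies across numpy versions):

```python
import numpy

# Prints the BLAS/LAPACK configuration numpy was built with;
# look for an optimized library such as OpenBLAS, MKL, or ATLAS.
numpy.show_config()
```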

On my machine, gensim.models.ldamulticore.LdaMulticore can saturate all 20 CPU cores with workers=4 during training. Setting workers higher than that did not speed up training. One likely reason is that the corpus iterator is too slow for LdaMulticore to be used effectively.
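One way to check whether the iterator is the bottleneck is to time a full pass over the corpus before training (the generator below is a hypothetical stand-in for a real disk-backed corpus):

```python
import time

def time_corpus_pass(corpus):
    """Time one full iteration over a corpus. If this is slow,
    LdaMulticore's workers will starve waiting for documents."""
    start = time.time()
    n_docs = sum(1 for _ in corpus)
    elapsed = time.time() - start
    return n_docs, elapsed

# Hypothetical stand-in for a real bag-of-words corpus stream:
dummy_corpus = ([(i % 100, 1.0)] for i in range(10000))
n, secs = time_corpus_pass(dummy_corpus)
print(f"iterated {n} docs in {secs:.2f}s")
```

If a single pass takes a large fraction of the per-pass training time, I/O, not LDA itself, is the limiting factor.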

You can try using ShardedCorpus to serialize and replace the corpus, which should be much faster to read and write. Also, simply compressing your large .mm file so it takes up less space (= less I/O) may help too. E.g.,

import bz2
import gensim

mm = gensim.corpora.MmCorpus(bz2.BZ2File('enwiki-latest-pages-articles_tfidf.mm.bz2'))
lda = gensim.models.ldamulticore.LdaMulticore(corpus=mm, id2word=id2word, num_topics=100, workers=4)
Jon
  • 1,211
  • 13
  • 29
  • My problem was indeed because of an I/O bottleneck in loading the corpus. I imagine that using `ShardedCorpus` might help -- I'll try that next time. For me, simply pre-loading the whole corpus into memory first (machine has almost a 1 T ram), solved the problem. Pre-loading is *way* faster than loading docs on demand. I'll try your other suggestions next time! – Edward Newell Dec 18 '15 at 01:50
  • Doesn't `corpora.MmCorpus('some_corpus.mm')` pre-load the corpus into memory? I am stuck on this issue too, where the logger says `using serial LDA version on this node` and then nothing.. – Koustuv Sinha Jan 13 '17 at 15:17
    Had a similar problem, `LdaMulticore` worked in one environment and not another, compared packages and found that removing scikit-learn (with llvm-openmp - probable source of issue) solved it. – InterwebIsGreat Dec 03 '19 at 12:25
  • This is old, but can confirm that conflicts were my issue as well. Creating a conda environment without scikit-learn solved the problem. – Sean Norton Dec 10 '19 at 19:54
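The pre-loading fix mentioned in the first comment can be sketched in plain Python (the corpus stream below is a stand-in): materialize the streamed corpus into a list once, so training passes read from RAM instead of hitting the disk repeatedly.

```python
def preload(corpus_stream):
    """Materialize a streamed corpus into RAM. Only viable when the
    whole corpus fits in memory, as on the ~1 TB machine above."""
    return list(corpus_stream)

# Stand-in for a disk-backed corpus iterator:
streamed = ([(i % 5, 1.0)] for i in range(1000))
in_memory = preload(streamed)
print(len(in_memory))
# Pass `in_memory` (not the one-shot generator) as LdaMulticore's corpus=.
```

Note that a generator can only be consumed once, so the list form is also what makes multi-pass training possible at all.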