I have run LDA using the topicmodels package on my training data. How can I determine the perplexity of the fitted model? I read the documentation, but I am not sure which code I should use.
Here's what I have so far:
# Gibbs sampling settings
burnin <- 500
iter <- 1000
# keep <- 30   # optionally save the log-likelihood every 30 iterations
k <- 4

results_training <- LDA(dtm_training, k,
                        method = "Gibbs",
                        control = list(burnin = burnin,
                                       iter = iter))

# Top 10 terms per topic and the 4 most likely topics per document
Terms <- terms(results_training, 10)
Topic <- topics(results_training, 4)

# Posterior probability for each document over each topic
posterior <- posterior(results_training)[[2]]
This works perfectly, but now my question is: how can I compute the perplexity on the testing data (results_testing)? And how should I interpret the resulting perplexity value?
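For reference, here is my untested guess at what the call might look like, assuming the held-out document-term matrix is named dtm_testing (that name is just my placeholder):

```r
library(topicmodels)

# My guess (untested): evaluate the fitted model on the held-out DTM.
# perplexity() takes the fitted LDA object and a newdata document-term matrix;
# as I understand it, lower perplexity means better generalization.
perp <- perplexity(results_training, newdata = dtm_testing)
perp
```

Is this the right way to call it for a Gibbs-fitted model, or does it need extra control arguments?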
Thanks