
I have implemented an LDA model using the text2vec package in R, but I am wondering how to assign each document to its topics.

Below is my code:

library(stringr)
library(rword2vec)
library(wordVectors)
#install.packages("text2vec")
library(text2vec)
library(data.table)
library(magrittr)

prep_fun = function(x) {
  x %>% 
    # make text lower case
    str_to_lower %>% 
    # remove non-alphanumeric symbols
    str_replace_all("[^[:alpha:]]", " ") %>% 
    # collapse multiple spaces
    str_replace_all("\\s+", " ")
}
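As a quick sanity check of the cleaning step, the same logic can be sketched in base R without stringr/magrittr; this is just an equivalent for illustration, not part of the original pipeline:

```r
# base-R equivalent of prep_fun: lower-case, strip non-letters, collapse spaces
prep_fun_base = function(x) {
  x = tolower(x)
  x = gsub("[^[:alpha:]]", " ", x)
  gsub("\\s+", " ", x)
}

prep_fun_base("Great movie!! 10/10")
# → "great movie "
```

Note that, like `prep_fun` above, this leaves a single space where punctuation and digits were removed; add `trimws()` if trailing spaces matter.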
movie_review_train = prep_fun(movie_review_train)

tokens = movie_review_train[1:1000] %>% 
  tolower %>% 
  word_tokenizer
it = itoken(tokens, progressbar = FALSE)
v = create_vocabulary(it)
v
vectorizer = vocab_vectorizer(v)
t1 = Sys.time()
dtm_train = create_dtm(it, vectorizer)
print(difftime(Sys.time(), t1, units = 'sec'))
dim(dtm_train)
stop_words = c("i", "me", "my", "myself", "we", "our", "ours", "ourselves")
t1 = Sys.time()
v = create_vocabulary(it, stopwords = stop_words)
print(difftime(Sys.time(), t1, units = 'sec'))
pruned_vocab = prune_vocabulary(v, 
                                term_count_min = 10, 
                                doc_proportion_max = 0.5,
                                doc_proportion_min = 0.001)
vectorizer = vocab_vectorizer(pruned_vocab)
# create dtm_train with new pruned vocabulary vectorizer
t1 = Sys.time()
dtm_train  = create_dtm(it, vectorizer)
print(difftime(Sys.time(), t1, units = 'sec'))
dtm_train_l1_norm = normalize(dtm_train, "l1")
tfidf = TfIdf$new()
# fit model to train data and transform train data with fitted model
dtm_train_tfidf = fit_transform(dtm_train, tfidf)

# LDA is fit on raw term counts, not tf-idf weights, so use dtm_train directly
dtm = dtm_train
lda_model = LDA$new(n_topics = ntopics,
                    doc_topic_prior = alphaprior,
                    topic_word_prior = deltaprior)
lda_model$get_top_words(n = 10, topic_number = c(1:5), lambda = 0.3)

After this I want to assign each document to its related topics. I am getting the list of top terms for each topic, but I do not know how to map documents to topics.

  • How about the official documentation, http://text2vec.org/topic_modeling.html#example6 ? – Dmitriy Selivanov May 02 '18 at 15:11
  • Thanks for the reference, but there too they map the distance between topics and the frequency of terms within each topic. I want to assign each document to the topics. – manjari May 03 '18 at 02:19

1 Answer


The document-topic distribution, doc_topic_distr, projects each document into topic space; it can be computed with the code below, based on the documentation by Dmitriy Selivanov (see http://text2vec.org/topic_modeling.html#example6).

In fact, a topic model has two important outputs: the topic-word matrix and the document-topic matrix. The topic-word matrix (topic-word distribution) gives each word's weight in each topic, while the document-topic matrix (document-topic distribution) gives each topic's contribution to each document.

doc_topic_distr = lda_model$fit_transform(x = dtm, n_iter = 1000,
                                          convergence_tol = 0.001,
                                          n_check_convergence = 25,
                                          progressbar = FALSE)
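Each row of doc_topic_distr is one document's probability distribution over topics, so a hard assignment is simply the column with the highest probability. A minimal, self-contained sketch using a toy matrix in place of the model output:

```r
# toy document-topic matrix: 3 documents x 4 topics (each row sums to 1)
doc_topic_distr = matrix(c(0.70, 0.10, 0.10, 0.10,
                           0.05, 0.05, 0.80, 0.10,
                           0.25, 0.40, 0.15, 0.20),
                         nrow = 3, byrow = TRUE)

# most probable topic per document
doc_topic = apply(doc_topic_distr, 1, which.max)
doc_topic
# → 1 3 2
```

On the real output, `apply(doc_topic_distr, 1, which.max)` gives one topic id per document; keep the full rows instead if you want soft (mixed-membership) assignments.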
Sam S.