
I have a rather strange error occurring when I run my topic-model code. Basically I have a .csv file with user comments, and I want to create a document-term matrix (DTM) with each comment being one document. I took a sample of 8k comments and used the following code on it:

> #LOAD LIBRARIES
> 
> library(tm)
> library(SnowballC)
> library(stringr)
> library(tictoc)
> tic()
> 
> #SET FILE LOCATION
> file_loc <- "C:/Users/Andreas/Desktop/first8k.csv"
> 
> #LOAD DOCUMENTS
> Database <- read.csv(file_loc, header = FALSE)
> require(tm)
> 
> #PROCEED
> Database <- Corpus(DataframeSource(Database))
> 
> Database <-tm_map(Database,content_transformer(tolower))
> 
> 
> Database <- tm_map(Database, removePunctuation)
> Database <- tm_map(Database, removeNumbers)
> Database <- tm_map(Database, removeWords, stopwords("english"))
> Database <- tm_map(Database, stripWhitespace)
> 
> 
> myStopwords <- c("some", "individual", "stop","words")
> Database <- tm_map(Database, removeWords, myStopwords)
> 
> Database <- tm_map(Database,stemDocument) 
> 
> 
> dtm <- DocumentTermMatrix(Database, control=list(minDocFreq=2,minWordLength=2))
> 
> row_total = apply(dtm, 1, sum)
> dtm.new = dtm[row_total>0,]
> 
> removeSparseTerms(dtm, .99)
>
> Outcome:
> DocumentTermMatrix (documents: 12753, terms: 194)
> Non-/sparse entries: 66261/2407821
> Sparsity           : 97%
> Maximal term length: 11
> Weighting          : term frequency (tf)
> 
> #TOPICMODELLING
> 
> library(topicmodels)
> 
>  
> 
> burnin <- 100
> iter <- 500
> thin <- 100
> seed <-list(200,5,500,3700,1666)
> nstart <- 5
> best <- TRUE
> 
>  
> k <- 12
> 
>
> ldaOut <-LDA(dtm.new,k, method="Gibbs", control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))
> 

So this one works just fine. But if I take another sample of 8k comments, also a .csv file in the same format, the following error occurs:

> library(tm)
> library(SnowballC)
> library(stringr)
> library(tictoc)
> tic()
> 
> #SET FILE LOCATION
> file_loc <- "C:/Users/Andreas/Desktop/try8k.csv"
> 
> #LOAD DOCUMENTS
> Database <- read.csv(file_loc, header = FALSE)
> require(tm)
> 
> #PROCEED
> Database <- Corpus(DataframeSource(Database))
> 
> Database <-tm_map(Database,content_transformer(tolower))
> 
> 
> Database <- tm_map(Database, removePunctuation)
> Database <- tm_map(Database, removeNumbers)
> Database <- tm_map(Database, removeWords, stopwords("english"))
> Database <- tm_map(Database, stripWhitespace)
> 
> 
> myStopwords <- c("some", "individual", "stop","words")
> Database <- tm_map(Database, removeWords, myStopwords)
> 
> Database <- tm_map(Database,stemDocument) 
> 
> dtm <- DocumentTermMatrix(Database,control=list(minDocFreq=2,minWordLength=2))
> 
> row_total = apply(dtm, 1, sum)
> dtm.new = dtm[row_total>0,]
> 
> removeSparseTerms(dtm, .99)
>
> Outcome:
> DocumentTermMatrix (documents: 9875, terms: 0)
> Non-/sparse entries: 0/0
> Sparsity           : 100%
> Maximal term length: 0
> Weighting          : term frequency (tf)
> 
> #TOPICMODELLING
> 
> library(topicmodels)
> 
>  
> 
> burnin <- 100
> iter <- 500
> thin <- 100
> seed <-list(200,5,500,3700,1666)
> nstart <- 5
> best <- TRUE
> 
>  
> k <- 12
> 
> 
> ldaOut <-LDA(dtm.new,k, method="Gibbs", control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))

> Fehler in obj[[i]][[which.max(sapply(obj[[i]], logLik))]] :
>   attempt to select less than one element in get1index

("Fehler in" is just "Error in"; my R runs in a German locale.)

I guess something with the DTM is not working, since it says there are 9875 documents but no terms at all. But I have absolutely no clue why the code works for one sample and not for the other. Please tell me if I have done something wrong in the code, or if you spot any other mistake.
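One check I can think of (a rough sketch reusing the objects from my script above; I haven't verified that it isolates the cause) is to look at what survives each preprocessing step:

```r
# Rough diagnostic sketch: reuses Database and dtm from the script above.
library(tm)

# Raw rows before any cleaning
# (read.csv with header = FALSE names the columns V1, V2, ...)
raw <- read.csv("C:/Users/Andreas/Desktop/try8k.csv",
                header = FALSE, stringsAsFactors = FALSE)
head(raw$V1, 3)

# Are the documents already empty after the tm_map() pipeline?
sapply(Database[1:5], function(d) nchar(content(d)))

# How many tokens does each document keep in the DTM?
summary(apply(dtm, 1, sum))
```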

Thanks in advance!

Andres

1 Answer


terms = 0 is the problem: after preprocessing, your second sample has no terms left in the document-term matrix, so `LDA()` has nothing to fit and fails with that error.
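Without seeing the data I can only guess, but a common cause is that the second file is read differently (a different encoding, or comments split across several columns), so the stopword removal and stemming strip everything. A rough sketch of how you could diagnose this and guard against an empty DTM (object names taken from your script; `stringsAsFactors` added by me):

```r
library(tm)

# 1) Check what read.csv actually produced: one comment per row, one column?
raw <- read.csv("C:/Users/Andreas/Desktop/try8k.csv",
                header = FALSE, stringsAsFactors = FALSE)
str(raw)                 # more than one column => comments were split up
sum(nchar(raw$V1) == 0)  # completely empty rows?

# 2) Drop documents that lost all their terms, as you already do
row_total <- apply(dtm, 1, sum)
dtm.new   <- dtm[row_total > 0, ]

# 3) Refuse to run LDA on a degenerate matrix instead of hitting get1index
if (nrow(dtm.new) == 0 || ncol(dtm.new) == 0)
  stop("DTM is empty - check the file's encoding and the preprocessing steps")
```

If `str(raw)` shows several columns, the comments were probably split on embedded commas when the file was written, and merging them back into a single text column before building the corpus should help; I'm not sure how your tm version handles multi-column data frames in `DataframeSource()`.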

  • Thanks for your answer. But as I said, my two databases are similar, so the second one also contains terms, of course. What I don't understand is why R filters out those terms or doesn't notice them. It's the same preprocessing... – Andres Feb 23 '17 at 09:37