I have been practicing data analysis in Python and am looking to do the same in R, particularly sentiment analysis. In Python, after training a Naive Bayes model I can save it as a pickle and later reload it to continue working with it; I am unsure how to do this in R. Below is what I currently do to train and test a data set using the e1071 library, after cleaning the data:
library(tm)     # corpus cleaning and document-term matrices
library(e1071)  # naiveBayes()

# Convert term counts to binary presence/absence factors ("No"/"Yes")
convert_count <- function(x) {
  y <- ifelse(x > 0, 1, 0)
  factor(y, levels = c(0, 1), labels = c("No", "Yes"))
}

trainNB <- apply(dtm.train.nb, 2, convert_count)
testNB  <- apply(dtm.test.nb, 2, convert_count)

# Train with Laplace smoothing, then predict on the held-out test set
system.time(classifier <- naiveBayes(trainNB, df.train$class, laplace = 1))
system.time(pred <- predict(classifier, newdata = testNB))

table("Predictions" = pred, "Actual" = df.test$class)
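For reference, this is roughly the Python workflow I want to replicate, sketched with scikit-learn's MultinomialNB and toy placeholder data rather than my actual sentiment model:

```python
# Sketch of the save/reload cycle I use in Python; the classifier choice
# (MultinomialNB) and the tiny count matrix are placeholders, not my real data.
import os
import pickle
import tempfile
from sklearn.naive_bayes import MultinomialNB

# Toy bag-of-words count matrix: 4 documents, 3 terms
X = [[2, 1, 0], [3, 0, 0], [0, 1, 2], [0, 0, 3]]
y = ["pos", "pos", "neg", "neg"]

clf = MultinomialNB()
clf.fit(X, y)

# Persist the trained model to disk ...
path = os.path.join(tempfile.mkdtemp(), "nb.pkl")
with open(path, "wb") as f:
    pickle.dump(clf, f)

# ... and reload it later to keep predicting
with open(path, "rb") as f:
    clf2 = pickle.load(f)

print(list(clf2.predict([[1, 1, 0]])))  # same predictions as the original model
```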
Can anyone explain what the Python pickle equivalent would be in R? A second question: does using tm to clean the corpus and then building the document-term matrix achieve a bag-of-words representation?
Thanks