
How to set up itoken in text2vec for splitting Chinese sentences? The example is for English! There are existing Chinese word-segmentation packages such as jieba, but I want to use text2vec for text clustering and LDA modelling. In addition, how do I do the text clustering?

library(text2vec)
library(data.table)
# Data preparation
# First use setkey to give the data a unique "primary key", then split it into training and test sets.
data("movie_review")
setDT(movie_review)
setkey(movie_review, id)
set.seed(2016L)
all_ids=movie_review$id
train_ids=sample(all_ids, 4000)
test_ids=setdiff(all_ids, train_ids)
train=movie_review[J(train_ids)]
test=movie_review[J(test_ids)]
# Document vectorization
# Vectorizing the documents is the core text2vec step: set up an itoken tokenization iterator, build the vocabulary with create_vocabulary, then construct the DTM.
prep_fun=tolower
# controls how the text is split into tokens
tok_fun=word_tokenizer
# Step 1: set up the tokenization iterator
it_train=itoken(train$review, preprocessor=prep_fun, tokenizer=tok_fun,
                ids=train$id, progressbar=FALSE)
# Step 2: tokenize and remove stop words
stop_words=c("i", "me", "my", "myself", "we", "our", "ours", "ourselves", "you", "your", "yours")
# build the vocabulary
vocab=create_vocabulary(it_train, stopwords=stop_words)
# prune low-frequency terms
pruned_vocab=prune_vocabulary(vocab,
                              term_count_min=10,        # drop terms appearing fewer than 10 times
                              doc_proportion_max=0.5,
                              doc_proportion_min=0.001)
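
The comments above also mention constructing the DTM; a minimal sketch of that next step, following the standard text2vec workflow and reusing the objects defined above, would be:

# create a vectorizer from the pruned vocabulary and build the document-term matrix
vectorizer=vocab_vectorizer(pruned_vocab)
dtm_train=create_dtm(it_train, vectorizer)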

1 Answer


The process is the same as for other languages (see details at http://text2vec.org), but you need to provide a tokenizer. I suggest taking a look at stringi::stri_split or tokenizers::tokenize_words (which uses stringi under the hood). For example:

stringi::stri_split_boundaries("首先运用Setkey为数据设置唯一的“主键, 并划分为训练集和测试集", type="word", skip_word_none=TRUE)
> [1] "首先"   "运用"   "Setkey" "为"     "数据"   "设置"   "唯一"   "的"     "主"     "键"     "并"     "划分为" "训练"   "集"     "和"     "测试"   "集"
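
To plug this into text2vec, you can wrap stri_split_boundaries in a tokenizer function and pass it to itoken; the rest of the pipeline (DTM, LDA, clustering) is then the same as for English. A rough sketch, where docs is a placeholder character vector of Chinese documents and all hyper-parameter values are illustrative rather than recommended:

library(text2vec)
library(stringi)

# Chinese tokenizer based on ICU word-boundary analysis
cn_tokenizer = function(x) {
  stri_split_boundaries(x, type = "word", skip_word_none = TRUE)
}

# iterator, vocabulary, and document-term matrix
it = itoken(docs, tokenizer = cn_tokenizer, progressbar = FALSE)
v = prune_vocabulary(create_vocabulary(it), term_count_min = 5)
dtm = create_dtm(it, vocab_vectorizer(v))

# LDA topic model on the DTM (n_topics etc. are illustrative)
lda = LDA$new(n_topics = 10, doc_topic_prior = 0.1, topic_word_prior = 0.01)
doc_topics = lda$fit_transform(dtm, n_iter = 500)

# simple document clustering on the topic distributions
clusters = kmeans(doc_topics, centers = 5)$cluster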