I am going to use CountVectorizer with a large corpus that I retrieve from Gutenberg (or any other data set from nltk). There are ebooks in this corpus, and I want to gather all the sentences from those books into a single list, something like: listsentences = ["SENTENCE#1", "SENTENCE#2", "SENTENCE#3", ...]. I am stuck on how to create that sentence list. Any help is massively appreciated! (I have put a rough sketch of what I think the list-building step might look like at the end of the post.) This is what my code looks like:
import pandas as pd
from nltk.corpus import gutenberg
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer

fileids = gutenberg.fileids()   # the ebooks available in the Gutenberg corpus
emma = gutenberg.sents()        # tokenized sentences from every book (lists of words, not strings)

vectorizer = CountVectorizer(min_df=1, stop_words='english')
dtm = vectorizer.fit_transform(emma)   # this is where I get stuck: CountVectorizer expects strings

pd.DataFrame(dtm.toarray(), columns=vectorizer.get_feature_names()).head(10)
vectorizer.get_feature_names()   # inspect the learned vocabulary

lsa = TruncatedSVD(3, algorithm='arpack')
dtm_lsa = lsa.fit_transform(dtm)
dtm_lsa = Normalizer(copy=False).fit_transform(dtm_lsa)
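For reference, this is roughly the kind of list construction I have in mind. It is only a sketch, and I am assuming that joining each tokenized sentence back into a single string is the right way to prepare the input:

from nltk.corpus import gutenberg

# Rebuild each tokenized sentence into one plain string, across every ebook in the corpus
listsentences = [" ".join(sent) for sent in gutenberg.sents()]
# listsentences should now be a flat list of sentence strings

If that is the right idea, then I would pass listsentences to vectorizer.fit_transform() instead of the tokenized emma variable above.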