I'm using the spaCy tokenizer to tokenize my data and then build a vocabulary.
This is my code:
import spacy

nlp = spacy.load("en_core_web_sm")

def build_vocab(docs, max_vocab=10000, min_freq=3):
    stoi = {'<PAD>': 0, '<UNK>': 1}
    itos = {0: '<PAD>', 1: '<UNK>'}
    word_freq = {}
    idx = 2
    for sentence in docs:
        for word in [tok.text.lower() for tok in nlp(sentence)]:
            word_freq[word] = word_freq.get(word, 0) + 1
            # add the word once, the moment its count reaches min_freq
            if word_freq[word] == min_freq and len(stoi) < max_vocab:
                stoi[word] = idx
                itos[idx] = word
                idx += 1
    return stoi, itos
But it takes hours to complete, since I have more than 800,000 sentences.
Is there a faster and better way to achieve this? Thanks.
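One idea I had, but haven't verified, is to skip the full en_core_web_sm pipeline (which, as far as I understand, runs the tagger/parser/NER on every sentence when I only need tokens) and count everything first with collections.Counter, roughly like this:

import spacy
from collections import Counter

# spacy.blank("en") gives a tokenizer-only pipeline; my assumption is that
# this avoids the tagger/parser/NER work that en_core_web_sm does per call
nlp = spacy.blank("en")

def build_vocab(docs, max_vocab=10000, min_freq=3):
    # count all tokens first, assign indices afterwards
    word_freq = Counter()
    for doc in nlp.pipe(docs):  # nlp.pipe processes the texts in batches
        word_freq.update(tok.text.lower() for tok in doc)

    stoi = {'<PAD>': 0, '<UNK>': 1}
    itos = {0: '<PAD>', 1: '<UNK>'}
    # most_common() is sorted by descending count, so we can stop at the
    # first word below min_freq, or once the vocab is full
    for word, freq in word_freq.most_common():
        if freq < min_freq or len(stoi) >= max_vocab:
            break
        itos[len(stoi)] = word
        stoi[word] = len(stoi)
    return stoi, itos

Note this assigns indices by descending frequency rather than by the order words reach min_freq, but I don't think the order matters for my use case.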
Update: I tried removing min_freq:
def build_vocab(docs, max_vocab=10000):
    stoi = {'<PAD>': 0, '<UNK>': 1}
    itos = {0: '<PAD>', 1: '<UNK>'}
    idx = 2
    for sentence in docs:
        for word in [tok.text.lower() for tok in nlp(sentence)]:
            if word not in stoi and len(stoi) < max_vocab:
                stoi[word] = idx
                itos[idx] = word
                idx += 1
    return stoi, itos
It still takes a long time. Does spaCy have a built-in function to build a vocab, like torchtext's .build_vocab?
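For reference, this is roughly what I mean from torchtext (assuming a recent version that provides build_vocab_from_iterator; the max_tokens argument may not exist in older releases):

from torchtext.vocab import build_vocab_from_iterator

def tokenize(docs):
    # yield one list of lowercased tokens per sentence
    for doc in nlp.pipe(docs):
        yield [tok.text.lower() for tok in doc]

vocab = build_vocab_from_iterator(
    tokenize(docs),
    min_freq=3,
    specials=['<PAD>', '<UNK>'],
    max_tokens=10000,  # assuming a torchtext version that supports max_tokens
)
vocab.set_default_index(vocab['<UNK>'])  # map out-of-vocab words to <UNK>

Is there an equivalent on the spaCy side, or is pairing spaCy tokenization with something like this the usual approach?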