As is well known, BERT's word embeddings are generally considered stronger than those produced by word2vec and similar models.
I want to build a model on top of BERT word embeddings that generates synonyms or similar words, the same way we do with Gensim Word2Vec. In other words, I want to replicate Gensim's model.most_similar() method, but on BERT word embeddings (a small example of the Gensim behaviour I mean is below).
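To be concrete, this is the kind of call I mean in Gensim (the model path here is just a placeholder for any trained Word2Vec model):

    # Gensim's built-in nearest-neighbour lookup over word2vec vectors
    from gensim.models import Word2Vec

    w2v = Word2Vec.load("my_word2vec.model")  # placeholder path to a trained model
    print(w2v.wv.most_similar("happy", topn=5))
    # returns a list of (word, cosine similarity) pairs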
I have researched this a lot, and it seems possible, but the problem is that I only get the embeddings back as numeric vectors; I can't see a way to map them back to actual words. Can anybody help me with this?
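What I have in mind is something like the following sketch (just my rough idea, assuming the Hugging Face transformers library and bert-base-uncased): use BERT's static input embedding matrix together with the tokenizer vocabulary, so each row of the matrix can be mapped back to a token string. Note this ignores context, since it only uses the wordpiece embedding layer rather than full contextual embeddings.

    # Rough analogue of Gensim's most_similar() over BERT's token embedding matrix
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    # Static input embedding matrix: one vector per vocabulary token [vocab_size, hidden]
    embedding_matrix = model.get_input_embeddings().weight.detach()

    def most_similar(word, topn=10):
        word_id = tokenizer.convert_tokens_to_ids(word)
        if word_id == tokenizer.unk_token_id:
            raise ValueError(f"'{word}' is not a single token in BERT's vocabulary")
        query = embedding_matrix[word_id]
        # Cosine similarity of the query vector against every vocabulary vector
        sims = torch.nn.functional.cosine_similarity(query.unsqueeze(0), embedding_matrix)
        best = torch.topk(sims, topn + 1).indices.tolist()
        # Map indices back to token strings, skipping the query word itself
        return [(tokenizer.convert_ids_to_tokens(i), sims[i].item())
                for i in best if i != word_id][:topn]

    print(most_similar("happy"))

Is something along these lines the right way to get actual words out, or is there a better approach?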