
I checked all the APIs and couldn't find a way to map a vector back to a word, whether in word2vec or GloVe. Googling doesn't help much either.

Does anybody know how to do this?

Background: I'm training a chatbot using a seq2seq model, but the implementations I've found so far use one-hot encoding. So I want to try GloVe embeddings instead, which means I need to map each output vector back to a word.


1 Answer


The outputs of your learning architecture will almost certainly not be identical to the vectors of particular words. What you could try is to compute the distance between each output vector and every vocabulary word's vector, and simply pick the word that minimizes that distance, as in the sketch below. Don't expect amazing results though; one-hot encodings are used for a reason.
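A minimal nearest-neighbour sketch could look like this, assuming you have already loaded the GloVe vectors into a NumPy matrix `embeddings` and a parallel word list `words` (both names, and the loading step, are placeholders, not part of any particular library):

```python
import numpy as np

def nearest_word(output_vector, embeddings, words):
    """Return the vocabulary word whose embedding is closest (by cosine
    distance) to the given output vector.

    embeddings: (vocab_size, dim) matrix of word vectors, e.g. from GloVe
    words:      list of vocab_size words, aligned row-for-row with embeddings
    """
    # Normalize all vectors so dot products equal cosine similarities.
    emb_norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    vec_norm = output_vector / np.linalg.norm(output_vector)
    # One similarity score per vocabulary word; the argmax of cosine
    # similarity is the word minimizing cosine distance.
    similarities = emb_norm @ vec_norm
    return words[int(np.argmax(similarities))]
```

If you load the vectors with gensim instead, its `KeyedVectors.similar_by_vector` method should perform essentially the same lookup for you.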
