I am training my neural network on a large corpus of text data. First I converted every word into a vector using GloVe; those vectors are the input to the neural network. Now I have an output vector from the network's output layer. How do I convert that output vector back to natural language?

Code to map a word to its vector:

def load_glove(dimen):
    mapping = {}
    with open("./data/glove/glove.6B/glove.6B." + str(dimen) + "d.txt") as f:
        for line in f:
            li = line.split()
            # map() is lazy in Python 3, so materialise the floats into a list
            mapping[li[0]] = [float(x) for x in li[1:]]
    return mapping

One possibility may be to use cosine similarity: take the output vector and find the closest word vector in the embedding space by cosine angle. Can TensorFlow help me here?
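That idea amounts to a nearest-neighbour lookup over the GloVe vocabulary. A minimal numpy sketch (plain numpy rather than TensorFlow; `nearest_word` is a made-up helper name, and `mapping` is assumed to be the dict returned by `load_glove`):

```python
import numpy as np

def nearest_word(vec, mapping):
    """Return the word whose GloVe vector has the highest cosine
    similarity to `vec`. `mapping` is assumed to be word -> list of floats."""
    words = list(mapping.keys())
    mat = np.array([mapping[w] for w in words])   # shape (vocab_size, dimen)
    vec = np.asarray(vec, dtype=float)
    # cosine similarity = dot product divided by the product of norms
    sims = mat @ vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(vec) + 1e-9)
    return words[int(np.argmax(sims))]
```

For a large vocabulary this brute-force scan is slow; an approximate nearest-neighbour index would be the usual optimisation, but the maths is the same.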

    Can you share the code you've used so far to generate the vectors? – Sal Sep 27 '18 at 20:20
  • Added in the edited question. – bot_xxx Sep 27 '18 at 20:34
  • I actually didn't generate the vector. Glove has the vector for corresponding word. – bot_xxx Sep 27 '18 at 20:35
  • There's not enough info about the output layer of your network to know how it should be interpreted. What has your network been trained to do? In classic word2vec, the network is trained to predict words via an output layer that is either one-word-per-node (in negative-sampling mode), or one-word-per-small-set-of-nodes-chosen-by-a-compact-encoding (in hierarchical-softmax mode). It's not clear if your network has been set up to be like those, or some other way. – gojomo Sep 28 '18 at 06:16
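Picking up gojomo's point: if the output layer were a softmax over the vocabulary (one node per word, as in word2vec's negative-sampling mode), decoding would just be an argmax over the output activations. A tiny illustrative sketch, with made-up `vocab` and `logits` values:

```python
import numpy as np

vocab = ["the", "cat", "sat"]           # assumed index -> word mapping
logits = np.array([0.1, 2.3, 0.5])      # assumed raw output of the final layer

# softmax turns logits into a probability distribution over the vocabulary
probs = np.exp(logits) / np.exp(logits).sum()
predicted_word = vocab[int(np.argmax(probs))]  # -> "cat"
```

Whether this applies depends entirely on how the network was trained, which is what the comment above is asking.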

0 Answers