In the word2vec model, there are two linear transforms that take a word from vocab space to a hidden layer (the "in" vector), and then back to vocab space (the "out" vector). Usually the out vectors are discarded after training. Is there an easy way to access the out vectors in gensim's Python API? Equivalently, how can I access the whole out matrix?
Motivation: I would like to implement the ideas presented in this recent paper: A Dual Embedding Space Model for Document Ranking
Here are more details. From the reference above we have the following word2vec model:
Here, the input layer is of size $V$ (the vocabulary size), the hidden layer is of size $d$, and the output layer is again of size $V$. The two weight matrices are $W_{IN}$ and $W_{OUT}$. Usually, the word2vec model keeps only the $W_{IN}$ matrix; its rows are what you get back after training a word2vec model in gensim, e.g.:
model['potato']=[-0.2,0.5,2,...]
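For context, here is roughly how I train the model and get that lookup (a minimal sketch with a toy corpus; argument names like size may differ between gensim versions):

    from gensim.models import Word2Vec

    # Toy corpus: a list of tokenized sentences (placeholder data).
    sentences = [
        ["the", "potato", "is", "a", "vegetable"],
        ["i", "ate", "a", "baked", "potato"],
    ]

    # size is the hidden-layer dimension d; negative > 0 enables negative sampling.
    model = Word2Vec(sentences, size=100, window=5, min_count=1, negative=5)

    # Row of W_IN for "potato" -- the "in" vector (newer gensim: model.wv['potato']).
    in_vector = model["potato"]
    print(in_vector.shape)  # (100,)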
How can I access, or retain, $W_{OUT}$? I'm really hoping there is a built-in way to do this in gensim, because I'm afraid that if I code it from scratch it would be computationally expensive and would not perform well.
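Skimming the gensim source, I suspect the output weights live in model.syn1neg when negative sampling is used (or model.syn1 with hierarchical softmax), so something like the sketch below is what I'm hoping for, but I don't know whether these attributes are meant to be accessed this way:

    import numpy as np

    # Guessed attribute: syn1neg seems to hold W_OUT (shape V x d) when negative
    # sampling is used; with hierarchical softmax (hs=1) it would be syn1 instead.
    W_out = model.syn1neg

    # Out vector for "potato": index into W_OUT by the word's vocabulary position
    # (older gensim: model.wv.vocab[word].index; gensim 4.x: model.wv.key_to_index[word]).
    potato_index = model.wv.vocab["potato"].index
    out_vector = W_out[potato_index]

    # The kind of IN-OUT cosine similarity the dual embedding paper uses.
    in_vector = model["potato"]
    score = np.dot(in_vector, out_vector) / (np.linalg.norm(in_vector) * np.linalg.norm(out_vector))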