
How could I use the code below to go through a folder of documents, get each document's vector, and then average them into a single overall vector?

import spacy

nlp = spacy.load("en_core_web_md")  # any pipeline with word vectors

documents_list = ['Hello, world', 'Here are two sentences.']
for doc in documents_list:
    doc_nlp = nlp(doc)
    print(doc_nlp.vector)                 # document-level vector
    for token in doc_nlp:
        print(token.text, token.vector)   # token-level vectors
emanuel tru

2 Answers


It seems like you want to average vectors at the sentence level, but your example also prints token-level vector representations.

Sentence level

Averaging sentence vectors could be done in the following way:

>>> import numpy as np
>>> np.array([nlp(doc).vector for doc in documents_list]).mean(axis=0)

This returns a single averaged vector across all sentences in documents_list.
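If the documents live in a folder on disk, the same pattern extends naturally. A minimal sketch, assuming a folder of plain-text files (the docs/ path and the en_core_web_md model are just placeholders for your setup):

    from pathlib import Path

    import numpy as np
    import spacy

    nlp = spacy.load("en_core_web_md")  # any pipeline with word vectors

    # Read every .txt file in the (hypothetical) docs/ folder
    texts = [p.read_text(encoding="utf-8") for p in Path("docs").glob("*.txt")]

    # One vector per document, then the element-wise mean over all documents
    doc_vectors = np.array([doc.vector for doc in nlp.pipe(texts)])
    average_vector = doc_vectors.mean(axis=0)
    print(average_vector.shape)  # (300,) for en_core_web_md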

Token level

You could achieve the same on a token level by doing the following:

>>> [np.array([token.vector for token in nlp(doc)]).mean(axis=0) for doc in documents_list]

This gives you one vector per sentence, averaged across that sentence's tokens; i.e. a list of vectors of length len(documents_list).
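Note that with a standard vectors-enabled spaCy pipeline, Doc.vector already defaults to the average of the token vectors, so the two approaches above should give (nearly) the same result. A quick sanity check, again assuming en_core_web_md:

    import numpy as np
    import spacy

    nlp = spacy.load("en_core_web_md")

    doc = nlp("Here are two sentences.")
    token_mean = np.array([token.vector for token in doc]).mean(axis=0)
    print(np.allclose(doc.vector, token_mean))  # should print True for a plain vectors-based pipeline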

Side note

As a side note, averaging vectors does not really preserve semantic structure, as it implicitly claims that the local context is equivalent to its broader context. Concatenating might be a better choice in a smaller windowed context (see the sketch below).
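For example, concatenating over a small fixed window keeps word order rather than collapsing it. A rough sketch (the window size and zero-padding choice are arbitrary illustrations, not recommendations):

    import numpy as np

    def window_vector(doc, window=5):
        """Concatenate the first `window` token vectors, zero-padding shorter docs."""
        dim = doc.vocab.vectors_length
        vecs = [token.vector for token in doc[:window]]
        vecs += [np.zeros(dim, dtype="float32")] * (window - len(vecs))
        return np.concatenate(vecs)  # shape: (window * dim,)

    windowed = [window_vector(nlp(doc)) for doc in documents_list]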

Make sure to test the results for your domain and task; averaging may still work well depending on your assumptions.

Nathan McCoy

I'm not sure what you mean by a document (I'm not familiar with spaCy), but if you want the average, you can just append each vector to a list and then, after the for loop, do:

avg = sum(vectors_list) / len(vectors_list)
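Put together with the code from the question, a minimal version of that (assuming nlp is a loaded spaCy pipeline with word vectors) would be:

    vectors_list = []
    for doc in documents_list:
        vectors_list.append(nlp(doc).vector)  # one vector per document

    # Element-wise average works because the vectors are NumPy arrays of equal length
    avg = sum(vectors_list) / len(vectors_list)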
Niayesh Isky