Hi, I'm trying to understand how scikit-learn works out the TF-IDF score in the matrix below, specifically the value for document 1, feature 6 ("wine"):
from sklearn.feature_extraction.text import TfidfVectorizer

test_doc = ['The wine was lovely', 'The red was delightful',
            'Terrible choice of wine', 'We had a bottle of red']

# Create vectorizer (defaults: norm='l2', smooth_idf=True)
vec = TfidfVectorizer(stop_words='english')

# Fit and build the tf-idf matrix
tfidf = vec.fit_transform(test_doc)
feature_names = vec.get_feature_names()
feature_matrix = tfidf.todense()
This gives the following feature names and matrix:
['bottle', 'choice', 'delightful', 'lovely', 'red', 'terrible', 'wine']
[[ 0. 0. 0. 0.78528828 0. 0. 0.6191303 ]
[ 0. 0. 0.78528828 0. 0.6191303 0. 0. ]
[ 0. 0.61761437 0. 0. 0. 0.61761437 0.48693426]
[ 0.78528828 0. 0. 0. 0.6191303 0. 0. ]]
I was using the answer to a very similar question to calculate it for myself: How are TF-IDF values calculated by the scikit-learn TfidfVectorizer? However, in that answer the TfidfVectorizer is created with norm=None.
As I'm using the default setting of norm='l2', how does this differ from norm=None, and how can I calculate the values by hand?
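For reference, here is my attempt at reproducing the first row by hand, assuming scikit-learn's documented defaults: raw term counts for tf, smooth idf (idf(t) = ln((1 + n) / (1 + df(t))) + 1), and then dividing each row by its Euclidean (L2) length so the row has unit norm. With norm=None that last division would simply be skipped.

```python
import numpy as np

# 4 documents; in doc 1 ('The wine was lovely') both surviving
# terms appear once, so tf = 1 for each.
n_docs = 4
df_wine = 2    # 'wine' appears in docs 1 and 3
df_lovely = 1  # 'lovely' appears in doc 1 only

# Smooth idf (sklearn default smooth_idf=True):
# idf(t) = ln((1 + n_docs) / (1 + df(t))) + 1
idf_wine = np.log((1 + n_docs) / (1 + df_wine)) + 1
idf_lovely = np.log((1 + n_docs) / (1 + df_lovely)) + 1

# Raw (unnormalized) tf-idf row for doc 1, in feature order [lovely, wine]
raw = np.array([idf_lovely, idf_wine])

# norm='l2': divide the row by its Euclidean length
row = raw / np.linalg.norm(raw)
print(row)  # ≈ [0.78528828, 0.6191303], matching row 1 of the matrix
```

So with norm=None you would see the raw values [1.91629073, 1.51082562] instead; the l2 setting only rescales each document's row to unit length, leaving the ratios between features unchanged.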