I'd like to ask: I saw max_norm=1 in a piece of code, and the documentation says this is the maximum norm. What does that mean? Why would the norm of an embedding vector exceed this limit so that it needs to be renormalized?
The user vectors are randomly initialized:
self.users = nn.Embedding(n_users, dim, max_norm=1)
From the documentation: max_norm (float, optional) – the maximum norm; if the norm of an embedding vector exceeds this limit, it is renormalized. There is also this: given that n_users is specified, setting dim=10 means each user is represented by a 10-dimensional vector. Why is a notion of dimension needed here at all, and does the choice of dimension make any difference?
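To make the question concrete, here is a minimal sketch (the sizes n_users=5 and dim=10 are made up just for illustration) of what max_norm=1 does. Each row of the embedding weight is initialized from N(0, 1), so with dim=10 its L2 norm is typically around sqrt(10) > 1; when a row is looked up in the forward pass, PyTorch renormalizes it in place so its norm does not exceed max_norm.

import torch
import torch.nn as nn

torch.manual_seed(0)

n_users, dim = 5, 10  # toy sizes, for illustration only
emb = nn.Embedding(n_users, dim, max_norm=1)

# Freshly initialized rows: L2 norms are usually well above 1
print(emb.weight.norm(dim=1))

# Looking up rows renormalizes (in place) any row whose norm exceeds max_norm
_ = emb(torch.arange(n_users))

# After the lookup, every accessed row has norm <= 1
print(emb.weight.norm(dim=1))

As for dim: it is simply the length of each user's vector. A larger dim gives the model more capacity to distinguish users but costs more parameters and data to train; the individual coordinates have no fixed meaning on their own.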