I have computed vectors for the same sentence twice using XLNet via embedding-as-service, but the model produces a different embedding on each call, so the cosine similarity is not 1 and the Euclidean distance is not 0. With BERT this works fine: encoding the same sentence twice gives identical vectors. For example, if
vec1 = en.encode(texts=['he is anger'],pooling='reduce_mean')
vec2 = en.encode(texts=['he is anger'],pooling='reduce_mean')
then the model (XLNet) reports these two encodings as dissimilar, even though the input sentence is identical.
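For completeness, this is roughly how I compare the two vectors. The Encoder initialization below is an assumption of a typical embedding-as-service setup (model name and max_seq_length may differ from my actual code); only the encode() calls are exactly as shown above.

from scipy.spatial.distance import cosine, euclidean
from embedding_as_service.text.encode import Encoder

# Assumed initialization; adjust the model name to whatever you actually load.
en = Encoder(embedding='xlnet', model='xlnet_large_cased', max_seq_length=256)

# Encode the same sentence twice with mean pooling (one vector per input text).
vec1 = en.encode(texts=['he is anger'], pooling='reduce_mean')
vec2 = en.encode(texts=['he is anger'], pooling='reduce_mean')

cos_sim = 1 - cosine(vec1[0], vec2[0])   # expected 1.0 for identical inputs
euc_dist = euclidean(vec1[0], vec2[0])   # expected 0.0 for identical inputs
print(cos_sim, euc_dist)                 # with XLNet the two vectors differ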