In spherical k-means, all vectors are normalized to unit length, and the distance measure is cosine dissimilarity.
In classic k-means we seek to minimize the (squared) Euclidean distance between each cluster center and the members of its cluster. The intuition is that the radial distance from the cluster center to each element should be small, and roughly similar, for all elements of that cluster.
In spherical k-means the idea is to place each cluster center so that the angle between the center and the cluster's members is both uniform and minimal. The intuition is like looking at stars: the points within a cluster should have consistent angular spacing from one another. That spacing is easiest to quantify as cosine similarity, and it means there are no "Milky Way" galaxies forming large bright swathes across the sky of the data.
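To make the assign/update loop concrete, here is a minimal sketch of spherical k-means in NumPy. The function name, the farthest-point initialization, and the toy data are my own illustration, not taken from the source; the essential differences from classic k-means are the unit-normalization of all vectors and the use of the dot product (cosine) for assignment.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=20):
    """Minimal spherical k-means sketch: all vectors live on the unit
    sphere and similarity is the cosine, i.e. the dot product."""
    # Project every point onto the unit sphere.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Deterministic farthest-point initialization: start with the first
    # point, then repeatedly add the point least similar to any center.
    centers = [X[0]]
    for _ in range(1, k):
        sims = np.max(X @ np.array(centers).T, axis=1)
        centers.append(X[np.argmin(sims)])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assignment: each point joins the center it is most cosine-similar to.
        labels = np.argmax(X @ centers.T, axis=1)
        # Update: mean direction of each cluster, re-projected onto the sphere.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                mean_dir = members.sum(axis=0)
                centers[j] = mean_dir / np.linalg.norm(mean_dir)
    return labels, centers
```

For example, two bundles of points near the directions (1, 0) and (0, 1) are separated cleanly even if their magnitudes vary, since only the angle matters after normalization.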
Source: Difference between standard and spherical k-means algorithms