Traditional unsupervised learning approaches usually require the number of clusters (k) to be specified before computing. What if I do not know a reasonable value for k and want to leave it out of the algorithm entirely? In other words, is there an unsupervised learning algorithm that does not need k to be assigned, so that the number of clusters is determined automatically?
3 Answers
You could try to infer the number of clusters with model-selection metrics such as the Akaike information criterion, the Bayesian information criterion, the silhouette score, or the elbow method. I have also heard people talk about automatic clustering methods based on self-organizing maps (SOM), but you would have to do your own research there.
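As a rough sketch of the silhouette idea (assuming scikit-learn is available; the synthetic `make_blobs` data and the candidate range 2–9 are just placeholders), you can fit k-means for several values of k and keep the one with the best score:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Toy data; in practice use your own feature matrix X.
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

scores = {}
for k in range(2, 10):  # candidate cluster counts
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # higher is better

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```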
In my experience it usually boils down to exploring the data with manifold methods such as t-SNE and/or density-based methods such as DBSCAN, and then setting k either manually or with a suitable heuristic.
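DBSCAN in particular never takes k; it infers the number of clusters from its density parameters. A minimal sketch, again assuming scikit-learn (the `eps` and `min_samples` values here are heuristic choices for this toy data):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)

labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
# Label -1 marks noise points, so exclude it when counting clusters.
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)
```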

Hierarchical clustering, which comes from graph theory, is another option. You can build the clustering either bottom-up or top-down.
Bottom up
- define a distance metric (Euclidean, Manhattan, ...)
- start with each point in its own cluster
- repeatedly merge the two closest clusters
There are three common ways to define the closest pair of clusters:
- complete link -> the two clusters with the smallest maximum pairwise distance
- single link -> the two clusters with the smallest minimum pairwise distance
- average link -> the two clusters with the smallest average pairwise distance
Single-linkage clustering can be solved with Kruskal's minimum spanning tree algorithm; while easy to understand, it runs in O(n^3). There is a variation of Prim's MST algorithm which solves it in O(n^2).
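A minimal sketch of the bottom-up approach with SciPy (the two-blob toy data and the distance threshold of 1.0 are assumptions for illustration): build the full dendrogram with a chosen linkage ("single", "complete", or "average") and cut it at a distance threshold instead of asking for k up front.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

Z = linkage(pdist(X, metric="euclidean"), method="single")  # bottom-up merges
labels = fcluster(Z, t=1.0, criterion="distance")           # cut dendrogram at distance 1.0
print(len(np.unique(labels)))                                # number of clusters found
```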
Top-down, also known as divisive analysis: start with all points in the same cluster and split clusters at each iteration.
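A rough sketch of the divisive idea, assuming scikit-learn (the bisection with 2-means, the `max_spread` threshold, and the spread measure are my own illustrative choices, not a standard implementation): keep splitting the "loosest" cluster until every cluster is tight enough.

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive(X, max_spread=1.0):
    clusters = [X]   # start with all points in one cluster
    done = []
    while clusters:
        c = clusters.pop()
        # average distance of the cluster's points to its centroid
        spread = np.mean(np.linalg.norm(c - c.mean(axis=0), axis=1))
        if spread <= max_spread or len(c) < 2:
            done.append(c)  # tight enough (or too small): stop splitting
        else:
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(c)
            clusters.extend([c[labels == 0], c[labels == 1]])
    return done
```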
There are other clustering algorithms which you can look up, some already mentioned in other answers. I have not used them, so I will leave them out.
