What does affinity='precomputed' mean in scikit-learn's FeatureAgglomeration dimensionality reduction, and how is it used? I got much better results with it than with the other affinity options ('euclidean', 'l1', 'l2', 'manhattan'), but I'm not sure what 'precomputed' actually means here. Do I have to provide something "precomputed" to the feature agglomeration algorithm myself?
I didn't pass anything other than the original data, preprocessed (scaled), as a NumPy array. After fit_transform with FeatureAgglomeration, the result was passed to the Birch clustering algorithm, and I got much better results than with the other affinities mentioned. The results are comparable to PCA, but with much lower memory overhead, so I would like to use feature agglomeration for dimensionality reduction. However, I'm concerned that I did something wrong.
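For reference, here is a rough reconstruction of my pipeline. The data below is a random placeholder (my real dataset is different), and the scaler, cluster counts, and linkage choice stand in for my actual settings; the point is that I pass the scaled data itself to fit_transform, with nothing "precomputed" by me:

```python
# Rough sketch of my pipeline with placeholder data.
import inspect

import numpy as np
from sklearn.cluster import Birch, FeatureAgglomeration
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
X = rng.rand(50, 50)                       # placeholder for my real data
X_scaled = MinMaxScaler().fit_transform(X)  # stand-in for my preprocessing

# 'affinity' was renamed to 'metric' in newer scikit-learn releases,
# so pick whichever keyword this installation accepts.
params = inspect.signature(FeatureAgglomeration).parameters
kw = "metric" if "metric" in params else "affinity"

# 'precomputed' is not allowed with the default linkage='ward',
# hence linkage='complete' here.
agglo = FeatureAgglomeration(n_clusters=10, linkage="complete",
                             **{kw: "precomputed"})

# I pass the scaled data itself -- I never computed any distance matrix.
X_reduced = agglo.fit_transform(X_scaled)   # shape (50, 10)

# The reduced data then goes into Birch clustering.
labels = Birch(n_clusters=3).fit_predict(X_reduced)
```

This runs without error for me, which is exactly why I'm unsure: if 'precomputed' expects a precomputed distance matrix, shouldn't passing raw scaled data have failed?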