The function score_samples from sklearn.neighbors.kde.KernelDensity returns the log of the density. What is the advantage of that over returning the density itself?
I know that the logarithm makes sense for probabilities, which lie between 0 and 1 (see this question: Why use log-probability estimates in GaussianNB [scikit-learn]?). But why do the same for densities, which lie between 0 and infinity?
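One motivation carries over directly from the probability case: per-point density values are often very small, and a likelihood formed as their product underflows in float64, while the sum of their logs stays finite. A minimal NumPy sketch (the density values here are made up for illustration, not from an actual KDE):

```python
import numpy as np

# 1000 per-point density values, each small (e.g. from the tails of a KDE)
dens = np.full(1000, 1e-5)

naive = np.prod(dens)           # product underflows to exactly 0.0 in float64
log_lik = np.sum(np.log(dens))  # sum of logs stays finite: 1000 * log(1e-5)

print(naive, log_lik)
```

The same reasoning applies for densities larger than 1: a product of many such values overflows, while the log-sum does not.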
Is there a way to estimate the log-density directly, or is it just the logarithm of the estimated density?
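For what it's worth, a kernel density estimate can at least in principle be evaluated entirely in log space, without ever forming the density itself, via the log-sum-exp trick. A minimal NumPy sketch for the 1-D Gaussian kernel (`log_kde_gaussian` is a hypothetical helper written for illustration, not sklearn API; I am not claiming this is exactly what sklearn does internally):

```python
import numpy as np

def log_kde_gaussian(x, data, h):
    # Hypothetical helper: log-density of a 1-D Gaussian KDE at point x,
    # with bandwidth h, computed entirely in log space.
    u = (x - data) / h
    # Log of each Gaussian kernel contribution log N(u; 0, 1)
    log_kernels = -0.5 * u**2 - 0.5 * np.log(2.0 * np.pi)
    # Log-sum-exp: subtract the max before exponentiating for stability
    m = log_kernels.max()
    log_sum = m + np.log(np.exp(log_kernels - m).sum())
    # log f(x) = log( (1 / (n h)) * sum_i K((x - x_i) / h) )
    return log_sum - np.log(len(data) * h)

data = np.array([0.0, 0.5, 1.0])
print(log_kde_gaussian(0.5, data, 0.2))
```

Because the max is subtracted before exponentiation, this stays finite even far out in the tails, where the density itself would underflow to 0 and its naive logarithm would be -inf.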