Recently, I have been working on a project and obtained 30 positive samples and 30 negative samples. Each sample has 128 features (i.e., it is 128-dimensional).
I used "LeaveOneOut" cross-validation and "sklearn.linear_model.LogisticRegression" to classify these samples and obtained a satisfactory result (AUC 0.87). When I told my friend about the results, he asked how the model parameters could be computed from only 60 samples, since the dimension of the feature vectors (128) is larger than the number of samples.
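For reference, my setup looks roughly like the sketch below (with synthetic random data standing in for my real samples, so the AUC it prints will not match my 0.87; the variable names are just illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the real data: 60 samples, 128 features
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 128))
y = np.array([1] * 30 + [0] * 30)  # 30 positive, 30 negative

# Leave-one-out: fit on 59 samples, predict the held-out one, repeat
loo = LeaveOneOut()
probs = np.empty(len(y))
for train_idx, test_idx in loo.split(X):
    clf = LogisticRegression(max_iter=1000)  # scikit-learn's defaults
    clf.fit(X[train_idx], y[train_idx])
    probs[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

# AUC over the pooled held-out predictions
auc = roc_auc_score(y, probs)
print(auc)
```

In every fold the classifier is fit on 59 samples of dimension 128, which is exactly the situation my friend asked about.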
Now I have the same question. I checked the source code of the toolkit but still have no idea how this works. Could someone help me with this question? Thanks!