I was trying to classify a dataset using the following strategy:
- Leave-one-out cross-validation (LOO)
- KNN to classify (counting the number of errors) for each "fold"
- Calculate the final error rate
- Repeat for k = [1,2,3,4,5,7,10,12,15,20]
Here's the code, using the fisheriris dataset:
load fisheriris
cur=meas;true_label=species;
for norm=0:2
    % normalizamos is just a function I use on my dataset for normalization:
    % norm=0 means no normalization; norm=1 and norm=2 are two different normalizations
    feats = normalizamos(cur, norm);
    c = cvpartition(size(feats,1), 'leaveout');  % leave-one-out partition
    for k = [1,2,3,4,5,7,10,12,15,20]
        clear n_erros
        for i = 1:c.NumTestSets
            tr = c.training(i); te = c.test(i);
            train_set   = feats(tr,:);
            test_set    = feats(te,:);
            train_class = true_label(tr);
            test_class  = true_label(te);
            pred = knnclassify(test_set, train_set, train_class, k);
            n_erros(i) = sum(~strcmp(pred, test_class));
        end
        err_rate = sum(n_erros)/sum(c.TestSize)  % no semicolon, so the rate is printed
    end
end
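(For reference: as far as I know, knnclassify belongs to the Bioinformatics Toolbox and is deprecated; the Statistics and Machine Learning Toolbox replacement is fitcknn, which can compute the LOO error directly through crossval/kfoldLoss. A minimal sketch of that cross-check, assuming the feats and true_label variables from above, would be:

for k = [1,2,3,4,5,7,10,12,15,20]
    mdl   = fitcknn(feats, true_label, 'NumNeighbors', k);  % k-NN model
    cvmdl = crossval(mdl, 'Leaveout', 'on');                % LOO partition
    err_rate = kfoldLoss(cvmdl)                             % misclassification rate
end

I haven't checked whether its defaults, e.g. distance metric and tie-breaking, match knnclassify exactly, so treat it as another way to get the LOO error rather than ground truth.)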
Since the results (for my own dataset) looked strange and incoherent, I decided to write my own version of LOO, as follows:
for i = 1:size(cur,1)
    test_set   = feats(i,:);
    test_class = true_label(i);
    % build the training set with observation i left out
    if i == 1
        train_set   = feats(i+1:end,:);
        train_class = true_label(i+1:end);
    else
        train_set   = [feats(1:i-1,:); feats(i+1:end,:)];
        train_class = [true_label(1:i-1); true_label(i+1:end)];
    end
    pred = knnclassify(test_set, train_set, train_class, k);
    n_erros(i) = sum(~strcmp(pred, test_class));
end
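(As an aside, I think the if i==1 special case is redundant, since feats(1:0,:) is just an empty matrix in MATLAB. If that's right, the loop can be written more compactly with a logical mask; a sketch, assuming feats, true_label and k are defined as above:

n = size(feats,1);
for i = 1:n
    tr = true(n,1);
    tr(i) = false;  % leave observation i out of the training set
    pred = knnclassify(feats(i,:), feats(tr,:), true_label(tr), k);
    n_erros(i) = ~strcmp(pred, true_label(i));
end
err_rate = sum(n_erros)/n

Either way, the logic should be equivalent to the if/else version above.)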
Assuming my version of the code is well written, I was hoping for the same, or at least similar, results. Here are both outcomes:
Any idea why the results are so different? Which version should I use? Now I'm thinking of rewriting the other tests I did (3-fold, 5-fold, etc.) just to be sure.
Thank you all