Scenario:
In closed-set face recognition, if we have 10 people in a Gallery set, then the query images will be from among these 10 people. Each query will thus be assigned to one of the 10 people.
In open-set face recognition, query faces may come from people outside the 10 persons in the Gallery. These extra people are known as "distractors". An example task can be found in the IJB-A challenge.
Question:
Suppose I have an SVM (one versus all) trained for each of the 10 identities. How am I to report accuracy in the open-set scenario? If a query image X comes in, my model will ALWAYS identify it as one of the 10 people in my Gallery, albeit with a low score if the person is actually not in the Gallery. So when reporting accuracy as a %, every distractor query image counts as an error, dragging down the overall accuracy of labeling each query image with its correct identity.
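To be concrete, here is a minimal sketch of what I mean, assuming scikit-learn one-vs-rest SVMs; clf and the feature arrays are placeholders, not my actual pipeline:

    import numpy as np
    from sklearn.svm import LinearSVC

    # clf = LinearSVC().fit(gallery_features, gallery_labels)  # 10 identities, labels 0..9

    def closed_set_accuracy(clf, query_features, query_labels):
        """Naive accuracy: every query is forced onto one of the 10 gallery
        identities. Distractor queries (labeled -1, i.e. not in the gallery)
        can never be predicted correctly, so each one contributes 0."""
        predictions = clf.predict(query_features)   # always one of the 10 IDs
        return np.mean(predictions == query_labels)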
Is this the correct way to report recognition accuracy under the open-set protocol? Or is there a standard way to set a threshold on the classification score, so that we can say "query image X has a low score for every identity in the Gallery, therefore it must be a distractor and we will not count it when computing recognition accuracy"?
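The thresholding alternative I have in mind looks roughly like this, reusing the same clf and a hypothetical threshold tau (which I would presumably tune on a validation set):

    def open_set_predict(clf, query_features, tau):
        """Return the best-matching gallery identity, or -1 ("distractor")
        when no identity's score clears the threshold tau."""
        scores = clf.decision_function(query_features)  # shape: (n_queries, 10)
        best = scores.argmax(axis=1)
        predictions = clf.classes_[best]
        predictions[scores.max(axis=1) < tau] = -1      # reject as distractor
        return predictions

Is this rejection-by-threshold approach the standard practice, and if so, how should the rejected queries figure into the reported accuracy?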
Lastly, a caveat: this question is very specific to biometrics, and face recognition in particular. However, SO tends to provide the most coherent answers, and biometrics people are likely to be active in the Vision and Image Processing tags here, which is why I am asking it on SO.