
I'm using a multiclass classifier (a Support Vector Machine, via One-Vs-All) to classify data samples. Let's say I currently have n distinct classes.

However, in the scenario I'm facing, it is possible that a new data sample may belong to a new class n+1 that hasn't been seen before.

So I guess you can say that I need a form of Online Learning, as there is no distinct training set in the beginning that suits all data appearing later. Instead I need the SVM to adapt dynamically to new classes that may appear in the future.

So I'm wondering if and how I can...

  1. identify that a new data sample does not quite fit into the existing classes but instead should result in creating a new class.

  2. integrate that new class into the existing classifier.

I can vaguely think of a few ideas that might be approaches to solve this problem:

  1. If none of the binary SVM classifiers (as I have one for each class in the OVA case) predicts a fairly high probability (e.g. > 0.5) for the new data sample, I could assume that this new data sample may represent a new class.

  2. I could train a new binary classifier for that new class and add it to the multiclass SVM.
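The two ideas above can be sketched together. This is a minimal, hypothetical illustration using scikit-learn's `OneVsRestClassifier` around an `SVC`; rather than Platt-scaled probabilities with a 0.5 cutoff, it thresholds the raw decision-function margins at 0 ("no binary classifier claims the point"), which is one common variant of the same rejection idea. The helper name `predict_or_flag` and the toy blob data are my own inventions, not an established API.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Toy data: three known classes, well separated.
X, y = make_blobs(n_samples=200, centers=[[0, 0], [6, 0], [0, 6]],
                  cluster_std=0.8, random_state=0)

# One binary RBF-SVM per class (one-vs-all).
ova = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

def predict_or_flag(ova, x):
    """Return the predicted class, or -1 when every binary classifier
    rejects the sample (all one-vs-all margins are negative)."""
    scores = ova.decision_function(x.reshape(1, -1))[0]
    if scores.max() < 0:
        return -1  # no classifier claims this point: new-class candidate
    return ova.classes_[np.argmax(scores)]

# A point far from all training blobs is a candidate for a new class:
print(predict_or_flag(ova, np.array([100.0, 100.0])))
```

Samples flagged with `-1` would then be collected until there are enough of them to train the additional binary classifier (as the comments below point out, a single point is not enough to fit an SVM on).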

However, these are just my naive thoughts. I'm wondering if there is some "proper" approach for this instead, e.g. using a clustering algorithm to find all classes.

Or maybe my approach of trying to use an SVM for this is not even appropriate for this kind of problem?

Help on this is greatly appreciated.

Oliver
  • Do you want to stick with using SVM? What is the range of classes that you are talking about? – Mido Dec 13 '15 at 20:42
  • - No, I don't necessarily need to stick with using SVM. I'm just using it as it seems to be the most common kernel based algorithm. - Not sure if I'm understanding your question about the range of classes correctly - the classes will probably be in a range of a dozen or two up to a hundred distinct classes or so. – Oliver Dec 14 '15 at 01:45
  • You have to take into consideration that you cannot train an SVM on a class with a single data point. This means that when you find a point that probably belongs to a new class, you'll have to wait till you get more points that are close to it before training a classifier for that class. The problem arises when you start getting two points that you can't classify where each of them belongs to a different class. – Mido Dec 14 '15 at 08:47
  • Okay, I see. So is there any proper / well-known approach to this kind of problem when it's not clear which classes you'll end up with in the end? – Oliver Dec 14 '15 at 21:57
  • Nothing that I know of. However, your approach seems fine but you'll have to find a measure for the similarity of the new unclassified points. If a group of them reaches a certain number, you can start building a classifier for that group. This is to overcome the problem I was telling you about. – Mido Dec 14 '15 at 22:18

1 Answer


As in any other machine learning problem, if you do not have a quality criterion, you suck.

When people say "classification", they have supervised learning in mind: there is some ground truth against which you can train and check your algorithms. If new classes can appear, this ground truth is ambiguous. Imagine one class is "horse", and you see many horses: black horses, brown horses, even white ones. And suddenly you see a zebra. Whoa! Is it a new class or just an unusual horse? The answer will depend on how you are going to use your class labels. The SVM itself cannot decide, because the SVM does not use these labels; it only produces them. The decision is up to a human (or to some decision-making algorithm which knows what is "good" and "bad", that is, has its own "loss function" or "utility function").

So you need a supervisor. But how can you assist this supervisor? Two options come to mind:

  1. Anomaly detection. This can help you with early occurrences of new classes. After the very first zebra, your algorithm can raise an alarm: "There is something unusual!". For example, in sklearn various algorithms from isolation forest to one-class SVM can be used to detect unusual observations. Then your supervisor can look at them and decide whether they deserve to form an entirely new class.

  2. Clustering. It can help you make a decision about splitting your classes. For example, after the first zebra, you decided it was not worth making a new class. But over time, your algorithm has accumulated dozens of zebra images. So if you run a clustering algorithm on all the observations labeled as "horses", you might end up with two well-separated clusters. And it will again be up to the supervisor to decide whether the striped horses should be detached from the plain ones into a new class.
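For the anomaly-detection option, a minimal sketch with scikit-learn's `IsolationForest` might look like this (a `OneClassSVM` could be dropped in the same way); the `contamination` value is a hypothetical choice you would tune, and the toy blob data stands in for your real features:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest

# Toy training data drawn from the known classes.
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [6, 0], [0, 6]],
                  cluster_std=0.8, random_state=0)

# Fit a detector on everything seen so far; contamination sets how
# large a fraction of the training data is treated as anomalous.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

# predict returns +1 for points that look like the training data
# and -1 for unusual points worth showing to the supervisor.
print(detector.predict(np.array([[0.0, 0.0], [100.0, 100.0]])))
```

Points flagged with `-1` are exactly the "there is something unusual!" cases the supervisor should review.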

If you want this decision to be purely automatic, you can split classes if the ratio of within-cluster mean distance to between-cluster distance is low enough. But it will work well only if you have a good distance metric in the first place. And what is "good" is again defined by how you use your algorithms and what your ultimate goal is.
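That within-to-between distance ratio is essentially what the silhouette score measures, so the automatic split check can be sketched as follows. This is only an illustration: the synthetic `plain`/`striped` arrays stand in for your "horse" observations, and the 0.6 threshold is a hypothetical choice that depends entirely on your distance metric.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-ins for the observations currently labeled "horse":
rng = np.random.default_rng(0)
plain = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
striped = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))
horses = np.vstack([plain, striped])

# Try splitting the class into two clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(horses)

# Silhouette is near 1 for well-separated clusters, near 0 for overlap.
score = silhouette_score(horses, labels)
if score > 0.6:
    print("well-separated clusters: propose a split to the supervisor")
```

In practice you would still route the proposed split through the supervisor rather than relabel automatically, for the reasons given above.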

David Dale