
I have read how imbalanced datasets can significantly affect classification results, but does using a transfer learning approach (like SSD) for object detection ensure that we don't need to balance the dataset to get good results?

  • I think there is no general answer to your question. As far as I understand it: with transfer learning you use pre-set weights through the (partial) re-use of network layers. Depending on your strategy for which weights to keep and which to discard, the pre-trained data will influence your results. This can lead to all sorts of outcomes. Ideally you get the best of both worlds: retaining relevant pattern recognition from pre-training while training the additional (usually high-level) patterns you need. So it might help, but it won't guarantee that you never need to care about imbalance. – Gegenwind Nov 04 '17 at 08:18

1 Answer


Short answer: no. But in practice, maybe.

These are unrelated ideas. The goal of transfer learning is to help when you have little training data, or when you don't want to (or can't) start learning from scratch.

The real answer therefore depends on many things: how imbalanced your dataset is (a little or a lot?), the kind of algorithm you are using (generative, like a Bayesian classifier, or discriminative, like an SVM?), how "separable" the classes are, and probably many other factors.
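For the discriminative case, a common mitigation is class weighting. The sketch below uses scikit-learn's `SVC` on synthetic data with a hypothetical 9:1 imbalance (none of these numbers come from the thread).

```python
# Class weighting for an imbalanced binary problem, sketched with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 90 "majority" points vs 10 "minority" points, moderately separated.
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)),
               rng.normal(2.5, 1.0, (10, 2))])
y = np.array([0] * 90 + [1] * 10)

# class_weight="balanced" rescales the penalty C inversely to class
# frequency, so the rare class is not drowned out by the majority.
weighted = SVC(kernel="linear", class_weight="balanced").fit(X, y)

# For comparison: an unweighted SVM, whose boundary tends to favour
# the majority class.
unweighted = SVC(kernel="linear").fit(X, y)

minority_hits = (weighted.predict(X[90:]) == 1).sum()
print(minority_hits)
```

How much this helps depends exactly on the factors listed above: the degree of imbalance and how separable the classes are.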

With a Bayesian classifier you can usually play with the prior probabilities of the classes...
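For instance, scikit-learn's `GaussianNB` exposes this directly through its `priors` parameter. The sketch below (synthetic data, illustrative numbers) compares priors estimated from imbalanced frequencies against manually imposed uniform priors:

```python
# Adjusting class priors in a (naive) Bayesian classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (95, 1)),
               rng.normal(2.0, 1.0, (5, 1))])
y = np.array([0] * 95 + [1] * 5)

# Default: priors estimated from the (imbalanced) training frequencies.
nb_default = GaussianNB().fit(X, y)

# Override: pretend the classes are equally likely, which shifts the
# decision boundary in favour of the rare class.
nb_uniform = GaussianNB(priors=[0.5, 0.5]).fit(X, y)

# The uniform-prior model assigns a larger region to the rare class.
grid = np.linspace(-2.0, 4.0, 61).reshape(-1, 1)
print((nb_default.predict(grid) == 1).sum(),
      (nb_uniform.predict(grid) == 1).sum())
```

Raising the rare class's prior can only enlarge its decision region (the likelihoods are unchanged), so this is a cheap way to compensate for imbalance when the training frequencies don't reflect deployment frequencies.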

So the answer will be specific to the problem you're handling; there is no general answer to this question.

jmarcio