
I understand that deep learning has more than one hidden layer, whereas regular machine learning has just one. Is that right? If so, why and how does having more than one layer give deep learning the edge over machine learning? I am asking for a specific use case: multi-label classification of texts. Do you think it is better to use DL or ML? I am using ML now and getting results of about 99% for some categories, but 30% for others. Would DL be a viable alternative?

yishairasowsky

3 Answers


Your understanding is not correct: regular machine learning is usually not associated with neural networks (which have layers); deep learning is just the branch of ML that deals with neural networks.

The problem with single-layer networks (also known as perceptrons) is that they are unable to correctly classify data that is not linearly separable (like the XOR problem). Similarly, more complex problems require deeper networks to achieve good results.
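To make the XOR point concrete, here is a minimal sketch in plain Python (the hidden-layer size, learning rate, and epoch count are all illustrative choices, not a recipe): a network with one hidden layer, trained by backpropagation, can fit XOR, which no single-layer perceptron can.

```python
import math
import random

random.seed(0)

# XOR: not linearly separable, so no single-layer perceptron can fit it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

H = 4  # hidden units (illustrative size)
W1 = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def sig(z):
    return 1 / (1 + math.exp(-z))

# Plain stochastic gradient descent with backpropagation.
for _ in range(20000):
    for (x1, x2), t in zip(X, y):
        h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(H)]
        o = sig(sum(W2[j] * h[j] for j in range(H)) + b2)
        do = (o - t) * o * (1 - o)          # gradient at the output sigmoid
        for j in range(H):
            dh = do * W2[j] * h[j] * (1 - h[j])  # backprop into hidden unit j
            W2[j] -= lr * do * h[j]
            W1[j][0] -= lr * dh * x1
            W1[j][1] -= lr * dh * x2
            b1[j] -= lr * dh
        b2 -= lr * do

preds = []
for (x1, x2) in X:
    h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(H)]
    o = sig(sum(W2[j] * h[j] for j in range(H)) + b2)
    preds.append(int(o > 0.5))
print(preds)  # typically converges to the XOR labels [0, 1, 1, 0]
```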

Ach113

Orthodox machine learning algorithms work on simpler mathematical models: an SVM uses a line (or hyperplane) to separate classes, and KNN uses distance to neighbours. These don't need much computation.
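As a sketch of how little machinery such models need, here is a minimal 1-nearest-neighbour classifier in plain Python (the data points and labels are made up for illustration):

```python
def one_nn(train, labels, x):
    # Classify x with the label of its nearest training point
    # (squared Euclidean distance; no training step needed at all).
    dists = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in train]
    return labels[dists.index(min(dists))]

train = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["A", "A", "B", "B"]
print(one_nn(train, labels, (1, 0)))  # "A" -- closest to the (0,0)/(0,1) cluster
print(one_nn(train, labels, (5, 6)))  # "B" -- closest to the (5,5)/(6,5) cluster
```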

But a neural net, or deep learning, is a network of small perceptrons. It starts with random weights, compares its output with the expected output, and in each round the weights are updated to tune the model.
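That loop of "random weights, compare with expected output, update each round" can be sketched for a single perceptron learning the (linearly separable) AND function; the seed, epoch count, and data are illustrative:

```python
import random

random.seed(0)

# AND is linearly separable, so one perceptron suffices.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

# Start from random weights.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

for _ in range(50):
    for (x1, x2), t in zip(X, y):
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = t - out                       # compare with expected output
        w[0] += err * x1                    # perceptron update rule
        w[1] += err * x2
        b += err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2) in X]
print(preds)  # converges to the AND labels [0, 0, 0, 1]
```

Convergence is guaranteed here by the perceptron convergence theorem precisely because AND is linearly separable; the same loop run on XOR never settles.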

With a single layer, the model is more prone to memorising weights rather than generalising. So instead, multiple layers are used with dropout, so that no matter what path the signal takes through the network, it gives a consistent output. The model then actually learns instead of memorising.
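Dropout itself is a very small mechanism. Here is a sketch of the common "inverted dropout" variant in plain Python (function name and activations are illustrative): at training time each unit is zeroed with probability `p` and the survivors are scaled by `1/(1-p)`, so inference needs no adjustment.

```python
import random

random.seed(1)

def dropout(activations, p=0.5, training=True):
    # Inverted dropout: randomly zero units at train time,
    # scale the survivors by 1/(1-p); do nothing at inference.
    if not training:
        return list(activations)
    return [a / (1 - p) if random.random() >= p else 0.0 for a in activations]

h = [0.2, 0.9, 0.5, 0.7]
print(dropout(h))                   # some units zeroed, others scaled x2
print(dropout(h, training=False))   # unchanged at inference
```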

However, too many layers degrade performance too. The goal is to find an optimum.


First, as others pointed out, classic ML is not limited to shallow neural networks, and choosing between classic ML and deep learning depends on many things: the problem, the scale of the dataset at hand, the processing power available...

Regarding the question on the number of layers: a shallow neural net (MLP) is supposed to be a universal approximator, so one can legitimately wonder why more than one hidden layer is needed for any problem. The issue is that finding the right set of weights to approximate a specific function on a specific problem is very hard, and current methods do not achieve it on shallow NNs. Deep neural nets come with many specific kinds of layers and tricks to improve training (some of which only work because of the depth of the model). Using these techniques makes it possible to find weights that get closer to the target than with a classic shallow NN.
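One practical side of this trade-off shows up in parameter counts: a deep net can often cover a problem with far fewer weights than a single very wide hidden layer. A quick sketch (the layer sizes below are arbitrary examples, not recommendations):

```python
def mlp_params(layer_sizes):
    # Weights plus biases for a fully connected net with the given layer sizes.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

wide = mlp_params([784, 4096, 10])            # one wide hidden layer
deep = mlp_params([784, 256, 256, 256, 10])   # three narrower hidden layers
print(wide)  # 3256330
print(deep)  # 335114 -- roughly 10x fewer parameters
```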

For more details, as suggested by @A.Maman, try the Cross Validated Stack Exchange site.

gdupont