
Many of the papers I have read so far mention that pre-training a network can improve computational efficiency in terms of back-propagating errors, and that this can be achieved using RBMs or autoencoders.

  1. If I have understood correctly, an autoencoder works by learning the identity function, and if it has fewer hidden units than the size of the input data, it also performs compression. But what does this have to do with improving the computational efficiency of propagating the error signal backwards? Is it because the weights of the pre-trained hidden units do not diverge much from their initial values? (A minimal sketch of such an autoencoder follows after this list.)

  2. Data scientists reading this will already know that autoencoders use the inputs as target values, since they learn the identity function, which makes this unsupervised learning. But can such a method be applied to convolutional neural networks, for which the first hidden layer is a feature map? Each feature map is created by convolving a learned kernel with a receptive field in the image. How could this learned kernel be obtained by pre-training (in an unsupervised fashion)?
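
For reference, here is a minimal sketch (in PyTorch, with made-up layer sizes and dummy data) of the kind of undercomplete autoencoder described in question 1: the target is the input itself, and because the hidden layer is smaller than the input, the encoder is forced to compress.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration: 784 inputs (e.g. flattened 28x28 images),
# 64 hidden units -- fewer than the input, so the encoder must compress.
input_dim, hidden_dim = 784, 64

autoencoder = nn.Sequential(
    nn.Linear(input_dim, hidden_dim),  # encoder
    nn.Sigmoid(),
    nn.Linear(hidden_dim, input_dim),  # decoder
)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

x = torch.rand(32, input_dim)          # dummy batch standing in for real data
optimizer.zero_grad()
reconstruction = autoencoder(x)
loss = criterion(reconstruction, x)    # target == input: unsupervised
loss.backward()
optimizer.step()
```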

honk
VM_AI

2 Answers


One thing to note is that autoencoders try to learn the identity function in a non-trivial way, not the identity function itself; otherwise they would not be useful at all. Pre-training helps move the weight vectors towards a good starting point on the error surface. The backpropagation algorithm, which is essentially doing gradient descent, is then used to improve upon those weights. Note that gradient descent gets stuck in the closest local minimum.

[Image: an error surface with local minima and a global minimum]

[Ignore the term Global Minima in the image above and think of it as another, better, local minimum.]
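
As a concrete illustration of the two-phase procedure described above, here is a sketch (PyTorch, with made-up sizes and dummy data) of unsupervised pre-training with an autoencoder followed by supervised fine-tuning with backpropagation from that starting point; it only shows the shape of the idea, not a real training setup.

```python
import torch
import torch.nn as nn

input_dim, hidden_dim, num_classes = 784, 128, 10   # hypothetical sizes

# Phase 1: unsupervised pre-training -- train an autoencoder on unlabeled data
# so the encoder weights end up in a reasonable region of the error surface.
encoder = nn.Linear(input_dim, hidden_dim)
decoder = nn.Linear(hidden_dim, input_dim)
pretrain_opt = torch.optim.SGD(
    list(encoder.parameters()) + list(decoder.parameters()), lr=0.1)

for _ in range(100):                                 # pre-training loop on dummy data
    x = torch.rand(32, input_dim)
    recon = decoder(torch.sigmoid(encoder(x)))
    loss = nn.functional.mse_loss(recon, x)
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()

# Phase 2: supervised fine-tuning -- reuse the pre-trained encoder as the first
# layer and let backpropagation improve the weights from that starting point.
classifier = nn.Sequential(encoder, nn.Sigmoid(), nn.Linear(hidden_dim, num_classes))
finetune_opt = torch.optim.SGD(classifier.parameters(), lr=0.01)

x, y = torch.rand(32, input_dim), torch.randint(0, num_classes, (32,))
loss = nn.functional.cross_entropy(classifier(x), y)
finetune_opt.zero_grad()
loss.backward()
finetune_opt.step()
```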

Intuitively speaking, suppose you are looking for an optimal path from origin A to destination B. Having a map with no routes shown on it (the errors you obtain at the last layer of the neural network) kind of tells you where to go, but you may put yourself on a route with a lot of obstacles, uphills and downhills. Then suppose someone tells you about a direction they have gone through before (the pre-training) and hands you a new map (the pre-training phase's starting point).

This could be an intuitive reason why starting with random weights and immediately optimizing the model with backpropagation may not achieve the performance you obtain with a pre-trained model. However, note that many models achieving state-of-the-art results do not use pre-training at all; they may use backpropagation in combination with other optimization methods (e.g. Adagrad, RMSProp, Momentum, ...) to hopefully avoid getting stuck in a bad local minimum.
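
For reference, here is a minimal numpy sketch of one of those update rules, RMSProp (toy values, not tied to any particular model): the per-parameter step size shrinks where past gradients have been large.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update: scale each step by a running average of squared gradients."""
    cache = decay * cache + (1 - decay) * grad**2   # running average of squared gradients
    w = w - lr * grad / (np.sqrt(cache) + eps)      # per-parameter scaled gradient step
    return w, cache

w = np.zeros(5)
cache = np.zeros_like(w)
grad = np.array([0.5, -1.0, 0.1, 0.0, 2.0])         # dummy gradient for illustration
w, cache = rmsprop_step(w, grad, cache)
```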

[Second image: see the source linked below]

Here's the source for the second image.

Amir
  • Thanks for the lucid explanation. – VM_AI Dec 30 '15 at 22:06
  • @VM_AI You're welcome. If you have a lot of data you may use new optimization techniques and will probably not need to do any pre-training on the model. – Amir Jan 01 '16 at 00:03

I don't know a lot about autoencoder theory, but I've done a bit of work with RBMs. What RBMs do is model the probability of seeing a specific type of data, in order to get the weights initialized to the right ballpark; they are considered (unsupervised) probabilistic models, so you don't correct the weights using known labels. Basically, the idea is that a learning rate that is too big will never lead to convergence, while one that is too small will take forever to train. Thus, by "pre-training" in this way you find the ballpark of the weights and can then set the learning rate to be small in order to get them down to the optimal values.
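
Since the above describes RBM pre-training only in words, here is a minimal numpy sketch of one contrastive-divergence (CD-1) update, with made-up sizes and dummy data; it only illustrates the idea of nudging the weights toward modelling the data before any supervised correction happens.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 784, 64, 0.01              # hypothetical sizes

W = rng.normal(0, 0.01, size=(n_visible, n_hidden))  # visible-to-hidden weights
b_v = np.zeros(n_visible)                            # visible biases
b_h = np.zeros(n_hidden)                             # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update on a batch of binary visible vectors."""
    # Positive phase: hidden probabilities given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step to get a "reconstruction".
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Move the weights toward making the data more probable than the reconstruction.
    batch = v0.shape[0]
    W[...] += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_v[...] += lr * (v0 - p_v1).mean(axis=0)
    b_h[...] += lr * (p_h0 - p_h1).mean(axis=0)

v0 = (rng.random((32, n_visible)) < 0.5).astype(float)  # dummy binary batch
cd1_step(v0)
```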

As for the second question: no, you don't generally pre-learn kernels, at least not in an unsupervised fashion. I suspect that what is meant by pre-training here is a bit different from your first question; that is, a pre-trained model (say from a model zoo) is taken and fine-tuned on a new set of data.
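
As a sketch of that fine-tuning workflow, assuming PyTorch/torchvision as one example of a model zoo (the number of classes here is made up): the already-learned kernels are frozen and only a new final layer is trained on the new data.

```python
import torch
import torchvision

num_classes = 5   # hypothetical number of classes in the new dataset

# Take a convnet whose kernels were already learned on a large dataset...
model = torchvision.models.resnet18(pretrained=True)

# ...freeze those learned kernels...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer so only it is trained on the new data.
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```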

Which model you use generally depends on the type of data you have and the task at hand. I've found convnets to train faster and more efficiently, but not all data has meaning when convolved, in which case DBNs may be the way to go. And if you have only a small amount of data, I'd use something other than neural networks entirely.

Anyways, I hope this helps clear up some of your questions.

ABC