Can someone explain, with an example, how to reuse the hidden layers of an autoencoder for a classification task with neural networks? I want to use two layers of my autoencoder in my multi-layer perceptron model in TensorFlow.
1 Answer
- Once you have trained the autoencoder, its hidden representation can be used for classification.
- After training the autoencoder, freeze the weights of the autoencoder model.
- Then make a forward pass from the input layer up to the hidden layer; the output of that layer is the hidden representation.
- The hidden representation can be used as input to any ordinary classifier, such as an SVM, or to another neural network such as an MLP.
- You can use just one layer of your autoencoder for classification.
- If you want to use two layers of the autoencoder (which I have not seen anyone do, so I think it is a bad idea), then concatenate the outputs of both layers, and the concatenated vector becomes the input to another classifier such as an SVM.
- If you have trouble coding this, show me your code and I will tell you in code what the further steps are; a rough sketch of both approaches is given after this list.
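
Here is a minimal sketch in TensorFlow/Keras of the two routes described above. The input size (784), the layer widths, and the 10 output classes are placeholders, not values from your model:

```python
from tensorflow.keras import layers, models

# --- 1. Build and train the autoencoder (sizes are illustrative) ---
inputs = layers.Input(shape=(784,))
h1 = layers.Dense(128, activation="relu", name="enc_1")(inputs)
h2 = layers.Dense(64, activation="relu", name="enc_2")(h1)
decoded = layers.Dense(784, activation="sigmoid")(h2)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=128)

# --- 2. Freeze the encoder layers so classification training
#        does not update their weights ---
autoencoder.get_layer("enc_1").trainable = False
autoencoder.get_layer("enc_2").trainable = False

# --- 3. Reuse both encoder layers as the front of an MLP classifier ---
clf_in = layers.Input(shape=(784,))
x = autoencoder.get_layer("enc_1")(clf_in)
x = autoencoder.get_layer("enc_2")(x)
x = layers.Dense(32, activation="relu")(x)       # new, trainable layer
out = layers.Dense(10, activation="softmax")(x)  # 10 classes assumed

classifier = models.Model(clf_in, out)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(x_train, y_train, epochs=10, batch_size=128)

# --- Alternative: export the hidden activations as fixed features,
#     e.g. concatenate both layers and feed them to an external SVM ---
feature_model = models.Model(inputs, layers.concatenate([h1, h2]))
# features = feature_model.predict(x_train)  # shape (n_samples, 128 + 64)
# then fit any classifier on `features`, e.g. sklearn.svm.SVC()
```

The first route keeps everything inside one Keras model, so the frozen encoder layers and the new dense layers train end to end as a single MLP; the second route just exports the hidden activations as fixed feature vectors for whatever classifier you prefer.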

Jai