Questions tagged [autoencoder]

An autoencoder, autoassociator, or Diabolo network is an artificial neural network used for learning efficient codings. As such, it belongs to the family of dimensionality reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, which is why it is used for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (for instance, in a face recognition task this would be the pixels of the photograph); a minimal code sketch follows the list below:

  • A number of hidden layers (usually with a smaller number of neurons), which will form the encoder.
  • A number of hidden layers leading to an output layer (usually growing progressively larger until the last one, where each neuron has the same meaning as in the input layer), which will form the decoder.
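
To make the layering concrete, here is a minimal sketch of such an encoder/decoder stack in Keras. The 784-dimensional input (a flattened 28x28 image), the 128-unit hidden layers, and the 32-dimensional coding are illustrative assumptions, not part of the definition:

    # Minimal fully connected autoencoder sketch (Keras).
    # All layer sizes below are illustrative assumptions.
    from tensorflow import keras
    from tensorflow.keras import layers

    input_dim = 784   # e.g. a flattened 28x28 image
    latent_dim = 32   # size of the compressed coding

    inputs = keras.Input(shape=(input_dim,))
    # Encoder: hidden layers shrink down to the coding.
    hidden = layers.Dense(128, activation="relu")(inputs)
    coding = layers.Dense(latent_dim, activation="relu")(hidden)
    # Decoder: layers grow back to the input size; each output neuron
    # has the same meaning as the matching input neuron.
    hidden = layers.Dense(128, activation="relu")(coding)
    outputs = layers.Dense(input_dim, activation="sigmoid")(hidden)

    autoencoder = keras.Model(inputs, outputs)
    # Trained to reproduce its own input (reconstruction loss).
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)

Note the target passed to fit is the input itself; that is what makes the network an auto-encoder rather than an ordinary supervised model.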

If linear neurons are used, then the optimal solution to an auto-encoder is strongly related to principal component analysis (PCA): the trained network ends up projecting onto the same principal subspace that PCA finds.
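
The relation can be checked numerically. The following sketch, assuming synthetic low-rank data and hand-rolled gradient descent (the shapes, learning rate, and step count are arbitrary choices for the demonstration), trains a purely linear autoencoder and compares its reconstruction error with the rank-k PCA reconstruction obtained from the SVD:

    # Sketch: with linear units and squared error, a trained autoencoder
    # approaches PCA's optimal rank-k reconstruction.
    import numpy as np

    rng = np.random.default_rng(0)
    k = 2
    # Synthetic data with k-dimensional structure plus a little noise.
    X = rng.normal(size=(500, k)) @ rng.normal(size=(k, 20))
    X += 0.1 * rng.normal(size=X.shape)
    X -= X.mean(axis=0)                        # PCA assumes centered data

    # PCA reconstruction error via the top-k right singular vectors.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V_k = Vt[:k].T
    pca_err = ((X - X @ V_k @ V_k.T) ** 2).mean()

    # Linear autoencoder trained by plain gradient descent.
    W_enc = rng.normal(scale=0.1, size=(20, k))
    W_dec = rng.normal(scale=0.1, size=(k, 20))
    lr = 0.01
    for _ in range(5000):
        Z = X @ W_enc                          # codings
        R = Z @ W_dec - X                      # reconstruction residual
        grad_dec = Z.T @ R / len(X)
        grad_enc = X.T @ (R @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

    ae_err = ((X @ W_enc @ W_dec - X) ** 2).mean()
    print(f"PCA error: {pca_err:.4f}, linear AE error: {ae_err:.4f}")
    # The two errors converge: both models project onto the same
    # k-dimensional principal subspace.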

When the hidden layers are larger than the input layer, an autoencoder can potentially learn the identity function and become useless; however, experimental results have shown that such autoencoders may still learn useful features in this case.

Auto-encoders can also be used to learn overcomplete feature representations of data.
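
A common way to keep an overcomplete autoencoder from simply copying its input is to penalize hidden activity so that only a few coding units fire at once. A minimal sketch in the same Keras style as above; the 100-dimensional input, the 4x expansion, and the L1 penalty weight of 1e-4 are all assumptions:

    # Overcomplete (sparse) autoencoder sketch: the coding layer is
    # larger than the input, so an L1 activity penalty discourages
    # the trivial identity solution. All sizes are assumptions.
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    input_dim = 100
    inputs = keras.Input(shape=(input_dim,))
    # 4x overcomplete coding with a sparsity-inducing activity penalty.
    coding = layers.Dense(
        4 * input_dim,
        activation="relu",
        activity_regularizer=regularizers.l1(1e-4),
    )(inputs)
    outputs = layers.Dense(input_dim, activation="linear")(coding)

    sparse_ae = keras.Model(inputs, outputs)
    sparse_ae.compile(optimizer="adam", loss="mse")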

The "coding" is also known as the embedded space or latent space in dimensionality reduction where the encoder will be used to project and the decoder to reconstruct.

1553 questions
0 votes • 1 answer

Caffe - training autoencoder with image data/image label pairs

I am very unfamiliar with Caffe. My task is to train an autoencoder net on image pairs, given in .tif format, where one is a grayscale image of nerves, and the other is the corresponding binary mask which shows if a certain structure is present on…
0 votes • 1 answer

Getting dimensions wrong when creating a feed-forward auto-encoder in Theano/Lasagne

I want to create a simple autoencoder with 3000 input, 2 hidden and 3000 output neurons: def build_autoencoder(input_var=None): l_in = InputLayer(shape=(None, 3000), input_var=input_var) l_hid = DenseLayer(l_in, num_units=2, …
0 votes • 0 answers

MATLAB - autoencoder for speech signal

I want to use the trainAutoencoder function from MATLAB to find the 30 main patterns of 300 speech signals. I tried to use this function and plotWeights to see the patterns (weights), but it seems that this is only for pictures and not for…
0 votes • 0 answers

Visualize weights of stacked Autoencoder

I trained a stacked Autoencoder. The weight matrix of the first hidden layer has dimensions (SizeHiddenLayer1 x SizeInputLayer), so visualizing this weight is simple because the size of the input data matches the columns of this weight. But the…
Ali • 3 • 2
0 votes • 1 answer

Unable to do parameter sharing in Torch between [sub]networks

I am trying to share the parameters between the encoder/decoder sub-networks of one architecture and the encoder/decoder of a different architecture. This is necessary for my problem since at test time it requires a lot of computation (and…
Amir • 10,600 • 9 • 48 • 75
0 votes • 0 answers

Pre-existing feature extractor for images

We want to build an image classifier that should classify an image into one of ~15 classes. We have a large labelled training corpus, so we could train a deep neural network using Caffe or some other deep learning library. Another…
0 votes • 1 answer

Auto-Encoders to classify images?

I am currently a student and I am developing a project of a Neural Network to classify a dataset of images. Since these images are not labeled, I would need an unsupervised method of learning. It has been suggested to me that I should use Auto-Encoders, is…
0 votes • 2 answers

deep autoencoder training, small data vs. big data

I am training a deep autoencoder (for now 5 encoding layers and 5 decoding layers, using leaky ReLU) to reduce the dimensionality of the data from about 2000 dims to 2. I can train my model on 10k samples, and the outcome is acceptable. The problem…
Mos • 11 • 2
0 votes • 1 answer

Using gradient descent instead of L-BFGS in a sparse autoencoder

In Andrew Ng's lecture notes, they use L-BFGS and get some hidden features. Can I use gradient descent instead and produce the same hidden features? All the other parameters are the same; only the optimization algorithm changes. Because when I use…
0 votes • 2 answers

How to separate autoencoder into encoder and decoder (TensorFlow + TFLearn)

I have been writing a simple autoencoder using tflearn. net = tflearn.input_data(shape=[None, train.shape[1]]) net = tflearn.fully_connected(net, 500, activation='tanh', regularizer=None, name='fc_en_1') #hidden state net =…
0 votes • 1 answer

Already trained HMM model for word recognition

I've implemented a phoneme classifier using an autoencoder (given an audio file array, it returns all the recognized phonemes). I want to extend this project so that word recognition is possible. Does there exist an already trained HMM model (in…
fxhh • 47 • 9
0 votes • 1 answer

Keras: Wrong Number of Training Epochs

I'm trying to build a class to quickly initialize and train an autoencoder for rapid prototyping. One thing I'd like to be able to do is quickly adjust the number of epochs I train for. However, it seems like no matter what I do, the model trains…
0 votes • 0 answers

How to extract (plot) the hidden units/softmax (features) from an autoencoder using TensorFlow

I'm new to ML, and I'm using TensorFlow. I want to see the features of my autoencoder, but I don't know how to extract (see) the hidden units. Could someone help me? I made my own dataset, but the original code is with…
Sn0w • 1 • 2
0 votes • 1 answer

Why is the mean absolute error not going down in a fully feed-forward network after doing VAE?

I am trying to build a prediction model. Initially I used a Variational Autoencoder and reduced the features from 2100 to 64. Now, having (5000 x 64) samples for training and (2000 x 64) for testing, I tried to build a fully feed-forward or MLP…
Vinod Prime • 371 • 1 • 3 • 13
0 votes • 1 answer

Is it necessary to normalize the data again after extracting it from an Autoencoder?

I used an Autoencoder for pre-training the data: I normalize the input data and pass it into the Autoencoder. As a result, the autoencoder ends up reducing the number of features. Now I want to use the output of the autoencoder for a prediction task.…