Questions tagged [autoencoder]

An autoencoder (also called an autoassociator or Diabolo network) is an artificial neural network used to learn efficient codings of data. As such, it belongs to the family of dimensionality-reduction algorithms.

The aim of an autoencoder is to learn a compressed, distributed representation (encoding) for a set of data, typically for dimensionality reduction. Autoencoders use two or more layers, starting from the input data (for instance, in a face-recognition task, the pixels of the photograph):

  • A number of hidden layers (usually with progressively fewer neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually progressively larger, with the output layer matching the input layer neuron for neuron), which form the decoder.
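The layer structure above can be sketched as a plain forward pass. A minimal NumPy sketch, assuming a 784-dimensional input (e.g. a flattened 28×28 image) and hypothetical layer sizes 784 → 128 → 32 → 128 → 784:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Encoder: 784 -> 128 -> 32 (progressively fewer neurons).
W1, b1 = rng.normal(scale=0.01, size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(scale=0.01, size=(128, 32)), np.zeros(32)

# Decoder: 32 -> 128 -> 784 (progressively more, back to input size).
W3, b3 = rng.normal(scale=0.01, size=(32, 128)), np.zeros(128)
W4, b4 = rng.normal(scale=0.01, size=(128, 784)), np.zeros(784)

def encode(x):
    return relu(relu(x @ W1 + b1) @ W2 + b2)

def decode(z):
    # Output neuron i reconstructs input component i (linear output layer).
    return relu(z @ W3 + b3) @ W4 + b4

x = rng.random((10, 784))   # batch of 10 inputs
z = encode(x)               # compressed code (the "encoding")
x_hat = decode(z)           # reconstruction, same shape as the input

print(z.shape, x_hat.shape)
```

Training then minimizes a reconstruction loss such as the mean squared error between `x` and `x_hat`.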

If linear neurons are used, then the optimal solution of an autoencoder is strongly related to PCA: the best low-dimensional linear code spans the same subspace as the top principal components of the data.
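This connection can be checked directly: the best rank-k linear reconstruction of centered data is the projection onto the top k principal components, so a linear autoencoder with a k-unit bottleneck cannot do better. A sketch (an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
X = X - X.mean(axis=0)              # center the data

# Top-k principal directions from the SVD of the data matrix.
k = 5
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                       # shape (20, k)

# A linear "encoder"/"decoder" pair built from PCA:
# encode: Z = X @ Vk,  decode: X_hat = Z @ Vk.T
X_hat = (X @ Vk) @ Vk.T

# Reconstruction error equals the energy in the discarded singular values,
# which is the minimum any rank-k linear autoencoder can achieve.
err = np.sum((X - X_hat) ** 2)
print(err, np.sum(S[k:] ** 2))
```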

When the hidden layers are larger than the input layer, an autoencoder can in principle learn the identity function and become useless; however, experimental results have shown that such overcomplete autoencoders can still learn useful features in practice.

Auto-encoders can also be used to learn overcomplete feature representations of data.

The "coding" is also known as the embedding or latent space in dimensionality reduction; the encoder projects data into this space and the decoder reconstructs data from it.

1553 questions
0
votes
1 answer

How to induce "uniform" sparsity/sparse coding in machine learning model?

I have a machine learning model (namely, an autoencoder) that attempts to learn a sparse representation of an input signal via a simple l1 penalty term added to the objective function. This indeed works to promote a sparse vector representation in…
0
votes
1 answer

Calculating Gradient Update

Let's say I want to manually calculate the gradient update with respect to the Kullback-Leibler divergence loss, say on a VAE (see an actual example from pytorch sample documentation here): KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) -…
Matt
  • 1,599
  • 3
  • 21
  • 33
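For context on the loss in the excerpt above: the KL divergence between a diagonal Gaussian N(mu, exp(logvar)) and the standard normal N(0, I) has a well-known closed form, 0.5 * sum(exp(logvar) + mu² - 1 - logvar). A NumPy sketch of that computation (the PyTorch version just swaps in torch ops):

```python
import numpy as np

def kld_diag_gaussian(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over all dimensions.

    Closed form: 0.5 * sum(exp(logvar) + mu^2 - 1 - logvar),
    written below in the same arrangement as the VAE loss excerpt.
    """
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))

# Sanity check: KL is zero when the approximate posterior is exactly N(0, I)...
print(kld_diag_gaussian(np.zeros(8), np.zeros(8)))

# ...and positive otherwise.
print(kld_diag_gaussian(np.ones(8), np.zeros(8)))
```

Because the expression is elementwise, its gradients are simple: d/d mu = mu, and d/d logvar = 0.5 * (exp(logvar) - 1).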
0
votes
1 answer

I am trying to build a Convolutional AutoEncoder in tensorflow on MNIST. How do I get the decoded image in the same shape as the original one?

I have written the encoder and decoder functions using layers API. Both are 3 layers deep. def Enocder(real_img): with tf.variable_scope("encoder"): conv1 = tf.layers.conv2d(inputs=X, filters=32, kernel_size=[ …
0
votes
1 answer

Adding noise to genomic data having discrete values (A, G, T, C)

Since genomic sequences vary greatly in length, I have been trying to work on using denoising autoencoders to get a compact representation for any given sequence. My expected input is a sequence of nucleotides (letters - A, G, T, C), for example,…
Aman Dalmia
  • 356
  • 2
  • 10
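A common way to feed discrete nucleotides (and their noisy corruptions) to a denoising autoencoder is one-hot encoding. A small sketch, assuming the fixed alphabet A, G, T, C (the mapping order is an arbitrary choice) and a hypothetical substitution-noise function:

```python
import numpy as np

ALPHABET = "AGTC"                      # assumed ordering; any fixed order works
IDX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot(seq):
    """Encode a nucleotide string as a (len(seq), 4) one-hot matrix."""
    out = np.zeros((len(seq), len(ALPHABET)))
    for i, c in enumerate(seq):
        out[i, IDX[c]] = 1.0
    return out

def add_noise(x, p=0.1, rng=None):
    """Corrupt each position with probability p by replacing it with a random base."""
    rng = rng or np.random.default_rng(0)
    x = x.copy()
    for i in range(len(x)):
        if rng.random() < p:
            x[i] = 0.0
            x[i, rng.integers(len(ALPHABET))] = 1.0
    return x

x = one_hot("GATTACA")
print(x.shape)   # one row per base, one column per letter
```

Variable-length sequences would still need padding or windowing to a fixed length before the encoder, which is the hard part of the question above.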
0
votes
1 answer

Keras Error for CNN Output

I am getting an error when trying to create a CNN model in Keras to build a denoising autoencoder. My Keras backend is TensorFlow. My input data is a numpy array. The numpy arrays were taken from grayscale images. I split this using sklearn…
Eric Z
  • 1
  • 1
0
votes
1 answer

Deeplearning4j Autoencoder

I couldn't find any full example of an autoencoder in the DL4J documentation. I see a good general description of autoencoders here, with a small piece of code for just the MultiLayerConfiguration, but the code is not complete. Is there any full example…
me._
  • 51
  • 1
  • 8
0
votes
1 answer

Keras fit_generator using input and output image generators 'ndim' error

I decided to try my hand at training an autoencoder for re-coloring grayscale images. This approach might be a tad naive, but I want to play with it, see how well (or badly) it works, and examine how I can improve it. However, it unexpectedly…
Lafayette
  • 568
  • 4
  • 19
0
votes
0 answers

Vector representation of time series in Keras stateful LSTM autoencoder

I am implementing an LSTM autoencoder in Keras to get a vector representation of my time-series data. The series I have are very long, so I am using stateful LSTMs. I create non-overlapping windows of each series and input them to the…
0
votes
1 answer

ValueError when training Autoencoder in Keras for unsupervised learning

I'm trying to use an autoencoder within Keras to do unsupervised classification of hyperspectral images using the Indian Pines dataset. I had started with a Project here https://github.com/KonstantinosF/Classification-of-Hyperspectral-Image and have…
Wes
  • 1,720
  • 3
  • 15
  • 26
0
votes
1 answer

How to use tf.layers.conv2d to train an autoencoder with tied weights

If I want to train an autoencoder with tied weights (encoder and decoder share the same weight parameters), how do I use tf.layers.conv2d to do that correctly? I cannot simply share variables between the corresponding conv2d layers of encoder and decoder,…
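For a fully connected layer, the tied-weight idea is simple: the decoder reuses the transpose of the encoder's weight matrix, and only the biases are separate. A NumPy sketch of that weight sharing (the conv2d analogue would reuse the same kernel in a transposed convolution rather than sharing variables between two conv2d layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One shared weight matrix used by both encoder and decoder.
W = rng.normal(scale=0.1, size=(784, 64))
b_enc = np.zeros(64)
b_dec = np.zeros(784)

def encode(x):
    return sigmoid(x @ W + b_enc)

def decode(h):
    # Tied weights: the decoder uses W transposed, not its own matrix.
    return sigmoid(h @ W.T + b_dec)

x = rng.random((5, 784))
x_hat = decode(encode(x))
print(x_hat.shape)   # same shape as the input
```

During training, gradients with respect to `W` accumulate from both the encoder and decoder paths, which is exactly what variable sharing in a framework is meant to achieve.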
0
votes
1 answer

Keras: LSTM Seq2Seq autoencoder input incompability error

I'm trying to run the Seq2Seq example here, https://blog.keras.io/building-autoencoders-in-keras.html from keras.layers import Input, LSTM, RepeatVector from keras.models import Model inputs = Input(shape=(timesteps, input_dim)) encoded =…
patti_jane
  • 3,293
  • 5
  • 21
  • 26
0
votes
1 answer

How to reuse hidden layers from an autoencoder for a classification task in tensorflow

Can someone explain, with an example, how to reuse the hidden layers of an autoencoder for a classification task? I want to use two layers of my autoencoder in my multi-layer perceptron model in tensorflow
0
votes
1 answer

Training Keras autoencoder without bottleneck does not return original data

I'm trying to make an autoencoder using Keras with a tensorflow backend. In particular, I have data of a vector of n_components (i.e. 200) sampled n_times (i.e. 20000). It is key that when I train on time t, I compare it only to time t. It appears…
arthur_s
  • 23
  • 3
0
votes
1 answer

autoencoder only provides linear output

Hello guys, I am currently working on an autoencoder reducing some simple 2D data to 1D. The architecture is 2 - 10 - 1 - 10 - 2 neurons per layer. As activation function I use sigmoid in every layer except the output layer, where I use the identity. I am…
Vallout
  • 50
  • 9
0
votes
2 answers

Difference between Autoencoder Network and Fully Convolutional Network

What is the main difference between autoencoder networks and fully convolutional networks? Please help me understand the difference between the architectures of these two networks.
PURNENDU MISHRA
  • 423
  • 5
  • 13