Questions tagged [autoencoder]

An autoencoder, autoassociator or Diabolo network is an artificial neural network used for learning efficient codings. As such, it belongs to the family of dimensionality-reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, which is what makes it useful for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (for instance, in a face-recognition task this would be the pixels of the photograph):

  • A number of hidden layers (usually with progressively fewer neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually progressively larger, with each neuron of the last layer having the same meaning as the corresponding neuron in the input layer), which form the decoder.
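
As a minimal sketch of this layer layout (hypothetical sizes, plain NumPy, forward pass only; training and the loss are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One dense layer: random weight matrix and zero bias."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

# Encoder shrinks 784 -> 128 -> 32; the decoder mirrors it back to 784.
encoder = [layer(784, 128), layer(128, 32)]
decoder = [layer(32, 128), layer(128, 784)]

def forward(x, layers):
    for W, b in layers:
        x = np.tanh(x @ W + b)   # nonlinearity on every layer
    return x

x = rng.normal(size=(10, 784))   # batch of 10 flattened "images"
code = forward(x, encoder)       # latent representation, shape (10, 32)
recon = forward(code, decoder)   # reconstruction, shape (10, 784)
print(code.shape, recon.shape)
```

Training would then minimise a reconstruction loss (e.g. mean squared error between `x` and `recon`) over the weights of both stacks.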

If linear neurons are used, then the optimal solution to an auto-encoder is strongly related to PCA.
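
One way to see the connection (a sketch, not a proof; the data and the latent size k are made up): the optimal linear autoencoder reconstructs through the top principal subspace, so choosing the encoder and decoder from the top-k right singular vectors of the centred data reproduces the rank-k PCA reconstruction exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
X = X - X.mean(axis=0)                                   # centre the data

k = 2                                    # latent dimension (hypothetical)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                            # top-k principal directions

# A linear autoencoder with encoder Vk.T and decoder Vk attains the
# optimum: its reconstruction is exactly the rank-k PCA reconstruction.
code = X @ Vk                            # encode: project onto components
recon = code @ Vk.T                      # decode: map back to input space

pca_recon = U[:, :k] * s[:k] @ Vt[:k]    # rank-k PCA reconstruction
assert np.allclose(recon, pca_recon)
err = np.mean((X - recon) ** 2)
```

A linear autoencoder trained by gradient descent converges to a reconstruction spanning this same subspace, though its weights need not equal the principal components themselves (any invertible mixing of them gives the same reconstruction).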

When the hidden layers are larger than the input layer, an autoencoder can potentially learn the identity function and become useless; however, experimental results have shown that such autoencoders may still learn useful features in practice.
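
To illustrate the identity-function risk (a hedged sketch with made-up sizes): with a hidden layer wider than the input, no nonlinearity and no other constraint, the decoder can simply invert the encoder, giving perfect reconstruction without learning any structure in the data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 4, 8                        # hidden layer (8) wider than input (4)
W_enc = rng.normal(size=(n, h))    # random full-rank encoder weights
W_dec = np.linalg.pinv(W_enc)      # decoder = pseudo-inverse of encoder

x = rng.normal(size=(3, n))
recon = x @ W_enc @ W_dec          # encoder followed by decoder
assert np.allclose(recon, x)       # identity map: perfect reconstruction
```

This is why overcomplete autoencoders are usually trained with an extra constraint, such as a sparsity penalty on the code or input corruption (denoising).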

Auto-encoders can also be used to learn overcomplete feature representations of data.

The "coding" is also known as the embedding or latent space in dimensionality reduction: the encoder projects the data into this space, and the decoder reconstructs from it.

1553 questions
0
votes
2 answers

TensorFlow gradients cause contractive autoencoder cost not to converge

To construct a contractive autoencoder, one uses an ordinary autoencoder with a cost function that adds a penalty on the Frobenius norm of the encoder's Jacobian. To implement this with the MNIST dataset, I defined the cost function using TensorFlow as def cost(X, X_prime): grad =…
Chester Cheng
  • 158
  • 1
  • 10
0
votes
1 answer

From 2D to 3D using convolutional autoencoder

I'd like to reconstruct a 3D object from 2D images. For that, I am trying to use a convolutional autoencoder. However, in which layer should I lift the dimensionality? I wrote the code below, but it raises an error: "RuntimeError: invalid argument 2: size…
0
votes
1 answer

How to extract lower dimensional feature vectors from a denoising stacked autoencoder using python and tensorflow

The code below imports the MNIST data set and trains a stacked denoising autoencoder to corrupt, encode, then decode the data. Basically I want to use this as a non-linear dimensionality-reduction technique. How can I access the lower-dimensional…
0
votes
1 answer

Learning python code

I use this code with Keras for feature learning, and now I want to do classification. I don't know how to add a softmax layer to my autoencoder; please help.
pady
  • 15
  • 1
  • 5
0
votes
1 answer

Auto-encoder based unsupervised clustering

I am trying to cluster a dataset using an encoder, and since I am new to this field I can't tell how to do it. My main issue is how to define the loss function, since the dataset is unlabeled; up to now, what I have seen in the literature they…
Adiaforos
  • 1
  • 1
0
votes
0 answers

Implementation of Sparse autoencoder by tensorflow

I am trying to implement a simple autoencoder like the one below. The number of input features is 2, and I want to build a sparse autoencoder for dimensionality reduction down to 1 feature. I selected the numbers of nodes as 2 (input), 8 (hidden), 1 (reduced feature),…
z991
  • 713
  • 1
  • 9
  • 21
0
votes
1 answer

Trying to adapt tflearn code, shape error

I'm trying to adapt this simple autoencoder code: https://github.com/tflearn/tflearn/blob/master/examples/images/autoencoder.py . I'm trying to change the code so that it uses convolutional layers and takes an input of 488 images * 30 height *…
0
votes
0 answers

Sampling data from normal distribution in VAE

I have recently been reading about the variational autoencoder. In this method, z is sampled from a normal distribution. I found some existing code like the following: eps = srng.normal((self.L, mu.shape[0], self.n_latent)) # Reparametrize z = mu + T.exp(0.5…
jef
  • 3,890
  • 10
  • 42
  • 76
0
votes
0 answers

Confused about Autoencoder behavior in Keras?

I trained an autoencoder in Keras and saved it as two separate models: the encoder and the decoder. I successfully load these, and then recreate the whole autoencoder with the following: ae_v = decoder(encoder(ae_in)) autoencoder =…
0
votes
1 answer

keras-tensorflow CAE dimension mismatch

I'm basically following this guide to build a convolutional autoencoder with the TensorFlow backend. The main difference from the guide is that my data is 257x257 grayscale images. The following code: TRAIN_FOLDER = 'data/OIRDS_gray/' EPOCHS = 10 SHAPE =…
jfp
  • 73
  • 7
0
votes
1 answer

how to properly get weights and biases from the model into the stacked autoencoder?

https://github.com/cmgreen210/TensorFlowDeepAutoencoder I'm trying to save and restore the model after the fine-tuning step. I tried to restore the model and then get the variables from it, and it gave me this error: ValueError: Variable…
0
votes
1 answer

Simple denoising autoencoder for 1D data in Matlab

I'm trying to set up a simple denoising autoencoder with Matlab for 1D data. As there is currently no specialised input layer for 1D data, the imageInputLayer() function has to be used: function net = DenoisingAutoencoder(data) [N, n] =…
0
votes
0 answers

Accuracy decreases while fitting neural network with Keras

I have this function that implements a neural network: def create(self, size_1l, size_2l, size_3l=0, size_4l=0): """ This function builds the denoising autoencoder using the parameters as sizes; the first is the input layer, the…
Francesco Scala
  • 67
  • 1
  • 10
0
votes
0 answers

stacked autoencoder for sign language recognition using custom database in tensorflow

I am trying to reconstruct the input images of my database using a stacked autoencoder in TensorFlow. If I use the MNIST database, I can reconstruct the input images correctly, but when I apply my own database I can't reconstruct the input images…
Rifat
  • 21
  • 5
0
votes
1 answer

How does the gradient of the sum trick work to get maxpooling positions in keras?

The keras examples directory contains a lightweight version of a stacked what-where autoencoder (SWWAE) which they train on MNIST data. (https://github.com/fchollet/keras/blob/master/examples/mnist_swwae.py) In the original SWWAE paper, the authors…
Avedis
  • 443
  • 3
  • 13