Questions tagged [autoencoder]

An autoencoder, autoassociator or Diabolo network is an artificial neural network used for learning efficient codings. As such, it belongs to the family of dimensionality-reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) of a set of data, which is what makes it useful for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (in a face-recognition task, for instance, the pixels of the photograph):

  • A number of hidden layers (usually with progressively fewer neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually growing progressively larger until the last layer, where each neuron has the same meaning as in the input layer), which form the decoder.
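The encoder/decoder structure above can be sketched in a few lines. This is a minimal NumPy-only illustration (all names are made up for the example, and a real model would use a framework such as Keras or PyTorch plus nonlinearities and biases): one encoder layer compresses the input into a smaller code, one decoder layer maps it back, and plain gradient descent minimizes the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie near a 3-dim subspace.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))

n_in, n_code = X.shape[1], 3                          # bottleneck smaller than the input
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_code, n_in))    # decoder weights

def forward(X):
    code = X @ W_enc          # encoder: project into the code space
    recon = code @ W_dec      # decoder: map the code back to input space
    return code, recon

def mse(A, B):
    return np.mean((A - B) ** 2)

_, recon0 = forward(X)
loss_before = mse(X, recon0)

lr = 0.1
for _ in range(2000):                                 # gradient descent on reconstruction error
    code, recon = forward(X)
    grad_recon = 2.0 * (recon - X) / X.size           # d(mse)/d(recon)
    g_dec = code.T @ grad_recon                       # gradient w.r.t. decoder weights
    g_enc = X.T @ (grad_recon @ W_dec.T)              # gradient w.r.t. encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

code, recon1 = forward(X)
loss_after = mse(X, recon1)
```

After training, the reconstruction error has dropped, even though every sample must pass through the 3-dimensional bottleneck.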

If linear neurons are used, the optimal solution of an auto-encoder is strongly related to principal component analysis (PCA): the weights of an optimal linear autoencoder span the same subspace as the top principal components of the data.
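The PCA connection can be checked numerically. In this sketch (variable names are illustrative), the best rank-k linear reconstruction is obtained by projecting onto the top-k principal directions from the SVD; an arbitrary linear encoder/decoder pair of the same width cannot do better.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 6))
X -= X.mean(axis=0)                   # PCA assumes centered data

k = 2
# PCA via SVD: rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T                          # (6, k) top-k principal components
X_pca = (X @ P) @ P.T                 # encode (project) then decode (reconstruct)
err_pca = np.mean((X - X_pca) ** 2)

# A random linear "encoder" E and "decoder" D of the same width k,
# standing in for an untrained linear autoencoder.
E = rng.normal(size=(6, k))
D = rng.normal(size=(k, 6))
err_random = np.mean((X - X @ E @ D) ** 2)
```

The PCA reconstruction error is the minimum achievable by any width-k linear encoder/decoder pair (Eckart–Young), so `err_pca <= err_random` for any other choice of linear weights.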

When the hidden layers are larger than the input layer, an autoencoder can in principle learn the identity function and become useless; in practice, however, experimental results have shown that such autoencoders may still learn useful features.

Auto-encoders can also be used to learn overcomplete feature representations of data.

In dimensionality reduction, the "coding" is also known as the embedded space or latent space: the encoder is used to project data into this space, and the decoder to reconstruct data from it.
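Once trained, the two halves are typically used separately. A minimal sketch of that split, with hypothetical (untrained) linear weights standing in for a learned model: `encode` projects into the latent space, `decode` reconstructs from it.

```python
import numpy as np

rng = np.random.default_rng(2)
W_enc = rng.normal(size=(10, 4))      # placeholder for learned encoder weights
W_dec = rng.normal(size=(4, 10))      # placeholder for learned decoder weights

def encode(x):
    """Project inputs into the 4-dimensional latent space."""
    return x @ W_enc

def decode(z):
    """Reconstruct inputs from latent codes."""
    return z @ W_dec

x = rng.normal(size=(5, 10))          # 5 samples, 10 features each
z = encode(x)                         # latent codes, shape (5, 4)
x_hat = decode(z)                     # reconstructions, shape (5, 10)
```

The latent codes `z` are what you would feed to a downstream task (visualization, clustering, a classifier), while `decode` is only needed when a reconstruction is required.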

1553 questions
8
votes
1 answer

Faster way to do multiple embeddings in PyTorch?

I'm working on a torch-based library for building autoencoders with tabular datasets. One big feature is learning embeddings for categorical features. In practice, however, training many embedding layers simultaneously is creating some slowdowns. I…
alliedtoasters
  • 420
  • 5
  • 13
8
votes
1 answer

Variational auto-encoder: implementing warm-up in Keras

I recently read this paper, which introduces a process called "Warm-Up" (WU) that consists in multiplying the KL-divergence term of the loss by a variable whose value depends on the epoch number (it evolves linearly from 0 to 1). I was wondering if…
sbaur
  • 328
  • 4
  • 13
8
votes
1 answer

Autoencoder not learning identity function

I'm somewhat new to machine learning in general, and I wanted to make a simple experiment to get more familiar with neural network autoencoders: To make an extremely basic autoencoder that would learn the identity function. I'm using Keras to make…
7
votes
1 answer

How to deal with KerasTensor and Tensor?

I'm trying to create a variational autoencoder, which means I need a custom loss function. The problem is that inside the loss function I have 2 different losses - mse and divergence. And mse is a Tensor and divergence is a KerasTensor ( because of dispersion…
7
votes
1 answer

How are the output size of MaxPooling2D, Conv2D, UpSampling2D layers calculated?

I'm learning about convolutional autoencoders and I am using Keras to build an image denoiser. The following code works for building a model: denoiser.add(Conv2D(32, (3,3), input_shape=(28,28,1), padding='same'))…
Amp
  • 158
  • 1
  • 2
  • 7
7
votes
1 answer

How do I split a convolutional autoencoder?

I have compiled an autoencoder (full code is below), and after training it I would like to split it into two separate models: encoder (layers e1...encoded) and decoder (all other layers) in which to feed manually modified images that had been…
MegaNightdude
  • 161
  • 2
  • 8
7
votes
2 answers

Keras LSTM autoencoder with embedding layer

I am trying to build a text LSTM autoencoder in Keras. I want to use an embedding layer but I'm not sure how to implement this. The code looks like this. inputs = Input(shape=(timesteps, input_dim)) embedding_layer = Embedding(numfeats + 1, …
Alex_Gidiotis
  • 73
  • 1
  • 6
7
votes
4 answers

ValueError: Input 0 is incompatible with layer conv_1: expected ndim=3, found ndim=4

I am trying to make a variational auto encoder to learn to encode DNA sequences, but am getting an unexpected error. My data is an array of one-hot arrays. The issue I'm getting is a Value Error. It's telling me that I have a four dimensional…
Benjamin Lee
  • 481
  • 1
  • 5
  • 15
7
votes
1 answer

Tensorflow Autoencoder - How To Calculate Reconstruction Error?

I've implemented the following Autoencoder in Tensorflow as shown below. It basically takes MNIST digits as inputs, learns the structure of the data and reproduces the input at its output. from __future__ import division, print_function,…
Adam
  • 610
  • 1
  • 7
  • 21
7
votes
2 answers

Can I use autoencoder for clustering?

In the code below, they use an autoencoder for supervised clustering or classification because they have data labels. http://amunategui.github.io/anomaly-detection-h2o/ But can I use an autoencoder to cluster data if I do not have its labels? Regards
forever
  • 139
  • 1
  • 2
  • 8
7
votes
1 answer

Tensorflow autoencoder cost not decreasing?

I am working on unsupervised feature learning with autoencoders in Tensorflow. I have written the following code for the Amazon csv dataset, and when I run it the cost is not decreasing at every iteration. Can you please help me find the bug…
7
votes
2 answers

How does pre-training improve classification in neural networks?

Many of the papers I have read so far mention that "pre-training network could improve computational efficiency in terms of back-propagating errors", and that this could be achieved using RBMs or Autoencoders. If I have understood correctly,…
6
votes
1 answer

Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU

Following my previous question, I have written this code to train an autoencoder and then extract the features. (There might be some changes in the variable names) # Autoencoder…
Kadaj13
  • 1,423
  • 3
  • 17
  • 41
6
votes
3 answers

keras variational autoencoder loss function

I've read this blog by Keras on VAE implementation, where VAE loss is defined this way: def vae_loss(x, x_decoded_mean): xent_loss = objectives.binary_crossentropy(x, x_decoded_mean) kl_loss = - 0.5 * K.mean(1 + z_log_sigma -…
pnaseri
  • 95
  • 2
  • 9
6
votes
1 answer

How can I build an LSTM AutoEncoder with PyTorch?

I have my data as a DataFrame: dOpen dHigh dLow dClose dVolume day_of_week_0 day_of_week_1 ... month_6 month_7 month_8 month_9 month_10 month_11 month_12 639 -0.002498 -0.000278 -0.005576 -0.002228 -0.002229 …
Shamoon
  • 41,293
  • 91
  • 306
  • 570