Questions tagged [autoencoder]

An autoencoder, autoassociator or Diabolo network is an artificial neural network used to learn efficient codings of data. As such, it belongs to the family of dimensionality reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, which is what makes it useful for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (for instance, in a face recognition task the input would be the pixels of the photograph):

  • A number of hidden layers (usually with progressively fewer neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually progressively larger, with each neuron of the last layer having the same meaning as the corresponding neuron of the input layer), which form the decoder.

If linear activations are used, the optimal solution of an auto-encoder is strongly related to principal component analysis (PCA).
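The PCA connection can be made concrete with plain NumPy: the best rank-k linear reconstruction of centered data (what an optimal linear autoencoder with a k-unit bottleneck achieves) is exactly the projection onto the top-k principal directions. A small illustrative check, with made-up random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)          # center the data, as PCA does

# Rank-k reconstruction via truncated SVD = the optimum a linear
# autoencoder with a k-unit bottleneck can reach
k = 2
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_rec_svd = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

# The same reconstruction phrased as encode/decode with the
# principal directions as (tied) weights
W_enc = Vt[:k].T                 # 5 -> 2 "encoder" weights
W_dec = Vt[:k]                   # 2 -> 5 "decoder" weights (the transpose)
X_rec_ae = (Xc @ W_enc) @ W_dec

print(np.allclose(X_rec_svd, X_rec_ae))  # True
```

A trained linear autoencoder will generally find weights spanning the same subspace, though not necessarily the principal directions themselves.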

When the hidden layers are larger than the input layer, an autoencoder can in principle learn the identity function and become useless; however, experimental results have shown that such autoencoders may still learn useful features in this case.

Auto-encoders can also be used to learn overcomplete feature representations of data.

The "coding" is also known as the embedding or latent space in dimensionality reduction; the encoder is used to project data into this space and the decoder to reconstruct it from there.
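The encode/project and decode/reconstruct structure above can be sketched in a few lines of NumPy. This is only the forward pass with random (untrained) weights; the sizes (784-pixel inputs, a 32-dimensional latent space) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 784-pixel inputs (28x28 images), 32-dim latent space
n_in, n_code = 784, 32
W_enc = rng.normal(scale=0.01, size=(n_in, n_code))
W_dec = rng.normal(scale=0.01, size=(n_code, n_in))

def encode(x):
    return sigmoid(x @ W_enc)     # project into the latent space

def decode(z):
    return sigmoid(z @ W_dec)     # reconstruct from the latent space

x = rng.normal(size=(10, n_in))   # a batch of 10 fake images
z = encode(x)
x_hat = decode(z)
print(z.shape, x_hat.shape)       # (10, 32) (10, 784)
```

Training would then minimize a reconstruction loss such as the mean squared error between `x` and `x_hat`.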

1553 questions
0
votes
1 answer

How to progressively grow a neural network in pytorch?

I am trying to make a progressive autoencoder and I have thought of a couple of ways of growing my network during training. However, I am always stuck on this one part where I don't know if changing the input (encoder) and output (decoder) channel would…
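One common trick for growing a network without disturbing what it has already learned (used, for example, in Net2Net-style growing) is to initialize the new layer as an identity mapping, so the network's function is unchanged at the moment of insertion. A toy linear sketch of the idea, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "network" as a list of weight matrices
layers = [rng.normal(size=(8, 4)), rng.normal(size=(4, 8))]  # encoder, decoder

def forward(x, layers):
    for W in layers:
        x = x @ W
    return x

x = rng.normal(size=(3, 8))
before = forward(x, layers)

# Grow: insert a new 4x4 layer between encoder and decoder,
# initialized to the identity so the network's function is preserved
layers.insert(1, np.eye(4))
after = forward(x, layers)

print(np.allclose(before, after))  # True
```

In PyTorch the same idea would mean adding a module whose weights start as (near-)identity, then fine-tuning from there; this is a sketch of the principle, not the questioner's actual code.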
0
votes
1 answer

Simple keras autoencoder with MNIST sample data not working

I'm trying to implement a simple Keras autoencoder in R using the MNIST sample dataset. I got my example from a blog but it doesn't work: I get almost 0% accuracy. The objective is to compress each 28 x 28 image (784 entries) into a vector of 32…
animalcroc
  • 283
  • 4
  • 13
0
votes
0 answers

Using an autoencoder to reduce dimensionality

Here is my version of an autoencoder written using PyTorch : import warnings warnings.filterwarnings('ignore') import numpy as np import matplotlib.pyplot as plt import pandas as pd from matplotlib import pyplot as plt from sklearn import…
blue-sky
  • 51,962
  • 152
  • 427
  • 752
0
votes
1 answer

Why doesn't the UpSampling2d Keras layer work?

I tried to build a convolutional autoencoder in keras but it doesn't seem to work properly. First of all, here's the Code: from keras.models import Sequential from keras.layers import Reshape from keras.layers import Flatten from keras.layers import…
Kay Jersch
  • 277
  • 3
  • 13
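For context on the layer in that question: by default, Keras `UpSampling2D` performs nearest-neighbour upsampling, i.e. each pixel is simply repeated along both spatial axes. The equivalent operation in plain NumPy:

```python
import numpy as np

# One 2x2 single-channel "feature map"
x = np.array([[1, 2],
              [3, 4]])

# Nearest-neighbour upsampling by a factor of 2 along each spatial axis,
# which is what Keras UpSampling2D(size=(2, 2)) does by default
up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
print(up)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

It contains no trainable parameters, unlike `Conv2DTranspose`, which learns its upsampling filter.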
0
votes
1 answer

How to create an Autoencoder where the encoder/decoder weights are mirrored (transposed)

I am attempting to build my first Autoencoder neural net using TensorFlow. The dimensions of the layers in the encoder and decoder are the same, just reversed. The autoencoder learns to compress and reconstruct image data to a reasonable standard,…
KOB
  • 4,084
  • 9
  • 44
  • 88
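What "mirrored (transposed) weights" means can be sketched framework-independently: the decoder reuses the transpose of the encoder matrix, so the two halves share parameters. A minimal NumPy sketch with illustrative sizes (not the questioner's TensorFlow code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tied weights: the decoder reuses the transpose of the encoder matrix,
# so only W_enc (plus the biases) are trainable parameters
n_in, n_code = 64, 16
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
b_enc = np.zeros(n_code)
b_dec = np.zeros(n_in)

def encode(x):
    return np.tanh(x @ W_enc + b_enc)

def decode(z):
    return np.tanh(z @ W_enc.T + b_dec)   # mirrored (transposed) weights

x = rng.normal(size=(5, n_in))
x_hat = decode(encode(x))
print(x_hat.shape)  # (5, 64)
```

Tying the weights roughly halves the parameter count and acts as a mild regularizer.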
0
votes
1 answer

Input 0 is incompatible with layer conv2d_transpose_1: expected ndim=4, found ndim=2

I am having trouble reshaping the layer before feeding it through deconvolution. I don't know how to reverse the Flatten layer in the convolution. Thanks for the help! def build_deep_autoencoder(img_shape, code_size): H,W,C = img_shape encoder =…
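The general fix for that ndim error is that the inverse of a Flatten layer is simply a reshape back to the spatial shape recorded before flattening, so the transposed convolution receives a 4-D tensor again. Sketched in NumPy with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of 4D conv activations: (batch, height, width, channels)
feat = rng.normal(size=(2, 7, 7, 32))

# Flatten for the dense bottleneck...
flat = feat.reshape(feat.shape[0], -1)      # (2, 1568)

# ...and "un-flatten" before the transposed convolution by reshaping
# back to the spatial shape recorded before flattening
restored = flat.reshape(-1, 7, 7, 32)       # (2, 7, 7, 32)

print(flat.shape, restored.shape)
```

In Keras this corresponds to a `Reshape((7, 7, 32))` layer placed between the dense bottleneck and the first `Conv2DTranspose`.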
0
votes
0 answers

How to add regularization in CNN autoencoder model_Based on Keras

I am a freshman in Keras and deep learning, and I am not quite sure of the right way to add regularization. I wrote a CNN autoencoder using the API model class; right now I add the regularizer in each of the "Conv2D" Keras functions, and I am not sure if…
J. Zhao
  • 231
  • 1
  • 2
  • 3
0
votes
1 answer

Negative dimension size caused by subtracting 3 from 2 for 'Encoder/conv6/Conv2D'

I am trying to implement an AutoEncoder in TensorFlow. I am a beginner in Python as well as on Stack Overflow. These two are my encoder and decoder. My train_data.shape is (42000,28,28,1) (MNIST dataset). def Network(Input): with…
Sayantan Das
  • 189
  • 1
  • 16
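Errors like "negative dimension size" come from stacking VALID-padded convolutions until the feature map runs out of pixels: the output length along one axis is `(n - kernel) // stride + 1`, and once the map is smaller than the kernel the formula goes non-positive. A small illustrative calculation (3x3 kernels, stride 2, starting from a 28-pixel MNIST axis):

```python
def conv_out_size(n, kernel, stride=1):
    """Spatial output size of a VALID-padded convolution along one axis."""
    return (n - kernel) // stride + 1

# MNIST input shrinking through stride-2 VALID convolutions with 3x3 kernels
n = 28
sizes = [n]
for _ in range(4):
    n = conv_out_size(n, kernel=3, stride=2)
    sizes.append(n)
print(sizes)  # [28, 13, 6, 2, 0]
```

The last step is exactly the reported "subtracting 3 from 2": a 2-pixel map cannot fit a 3x3 kernel. The usual fixes are SAME padding, fewer layers, or smaller strides.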
0
votes
1 answer

Preprocessing and dropout in Autoencoders?

I am working with autoencoders and have a few confusions. I am trying different autoencoders, like: fully connected autoencoder, convolutional autoencoder, denoising autoencoder. I have two datasets. One is a numerical dataset which has float and int…
Aaditya Ura
  • 12,007
  • 7
  • 50
  • 88
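On the preprocessing half of that question, a common convention is to scale each numeric column to [0, 1] so that a sigmoid output layer can actually match the input range. A minimal sketch with made-up data:

```python
import numpy as np

# Made-up mixed-scale numeric data: scale each column to [0, 1]
# so sigmoid reconstructions can match the input range
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

X_min = X.min(axis=0)
X_max = X.max(axis=0)
X_scaled = (X - X_min) / (X_max - X_min)
print(X_scaled)
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]
```

The same `X_min`/`X_max` must be reused to transform validation and test data, and inverted to read reconstructions back in original units.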
0
votes
1 answer

Error when adding Flatten layer to Sequential model

I have created, and trained, an autoencoder using Keras. After training this model I want to get only the encoder part, so I did some pop() calls. Later I created the Sequential() model, based on the remaining layers of my autoencoder model: model_seq =…
Helder
  • 482
  • 5
  • 18
0
votes
1 answer

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[28800,19200]

I posted a question about an autoencoder. I implemented the following program, but now, when I input an image of 160 horizontal pixels by 120 vertical pixels, "ResourceExhaustedError" occurs and I cannot proceed with learning. Specifically, Error…
oguririn
  • 7
  • 2
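A shape like [28800,19200] in an OOM message usually points at a single fully connected weight matrix (19200 = 160 x 120 input pixels). Its memory footprint alone, assuming float32 weights, is easy to estimate:

```python
# Rough memory footprint of one dense weight matrix of shape [28800, 19200]
rows, cols = 28800, 19200
params = rows * cols
bytes_fp32 = params * 4                   # float32 = 4 bytes per weight
print(params, bytes_fp32 / 1024**3)       # ~553 million weights, ~2.06 GiB
```

That is per copy; the optimizer and gradients multiply it further, so convolutional layers or a much smaller dense bottleneck are the usual remedy for images this size.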
0
votes
2 answers

Tensorflow Convolutional Autoencoder

I've been trying to implement a convolutional autoencoder in Tensorflow similar to how it was done in Keras in this tutorial. So far this is what my code looks like filter1 = tf.Variable(tf.random_normal([3, 3, 1, 16])) filter2 =…
0
votes
1 answer

Incompatible shapes of 1 using auto encoder

I'm trying to use an auto-encoder on time series. When I use padding on the data everything works, but when I'm using variable data lengths I get small data shape issues: Incompatible shapes: [1,125,4] vs. [1,126,4] input_series = Input(shape=(None,…
Neabfi
  • 4,411
  • 3
  • 32
  • 42
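Off-by-one shape mismatches like [1,125,4] vs. [1,126,4] typically come from pooling's floor division: with pool size 2, a downsample-then-upsample pair returns length `(L // 2) * 2`, so odd lengths cannot round-trip. A one-function sketch of the effect (assuming default `MaxPooling1D`/`UpSampling1D` behaviour):

```python
def round_trip(length, pool=2):
    """Length after MaxPooling1D(pool) followed by UpSampling1D(pool)."""
    return (length // pool) * pool

print(round_trip(126))  # 126 -- even lengths survive the round trip
print(round_trip(125))  # 124 -- odd lengths come back shorter, shapes mismatch
```

The usual fixes are padding every sequence to a multiple of the total pooling factor, or cropping/padding the decoder output to the input length before computing the loss.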
0
votes
1 answer

Can I use the `tf.contrib.seq2seq.dynamic_decode` to replace the function `tf.nn.dynamic_rnn` in encoder-decoder framework?

Actually, I want to generate sequences just like Alex Graves did. I have a TensorFlow implementation. At the same time, I want to try an attention-based seq2seq model to generate the handwriting. So for the decoder, I did it…
Lily.chen
  • 119
  • 1
  • 8
0
votes
1 answer

Connect custom input pipeline to tf model

I am currently trying to get a simple TensorFlow model to train on data provided by a custom input pipeline. It should work as efficiently as possible. Although I've read lots of tutorials, I can't get it to work. THE DATA I have my training data…
DocDriven
  • 3,726
  • 6
  • 24
  • 53