Questions tagged [autoencoder]

An autoencoder, autoassociator or Diabolo network is an artificial neural network used for learning efficient codings. As such, it belongs to the family of dimensionality reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, which is why it is used for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (for instance, the pixels of the photograph in a face-recognition task):

  • A number of hidden layers (usually with progressively fewer neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually progressively larger, with the last one the same size as the input layer, where each neuron has the same meaning as the corresponding input neuron), which form the decoder.

If linear neurons are used, then the optimal solution to an auto-encoder is strongly related to PCA.
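A minimal sketch of this connection, using plain numpy (the data, sizes, and variable names are illustrative, not taken from any of the questions below): a linear autoencoder with a k-unit bottleneck, trained by plain gradient descent, drives its reconstruction error toward the optimum achieved by PCA with k components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points in 5 dimensions, lying mostly in a 2-D subspace.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
X += 0.01 * rng.normal(size=X.shape)
X -= X.mean(axis=0)          # center the data, as PCA does

k = 2                        # bottleneck size / number of principal components

# PCA reconstruction error with k components, via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_err = np.mean((X - X @ Vt[:k].T @ Vt[:k]) ** 2)

# Linear autoencoder: reconstruct X as X @ W_enc @ W_dec, no nonlinearity.
W_enc = 0.1 * rng.normal(size=(5, k))
W_dec = 0.1 * rng.normal(size=(k, 5))
lr = 0.01
for _ in range(3000):
    H = X @ W_enc                       # encode
    G = 2 * (H @ W_dec - X) / len(X)    # d(MSE)/d(reconstruction)
    W_enc -= lr * X.T @ (G @ W_dec.T)
    W_dec -= lr * H.T @ G
ae_err = np.mean((X @ W_enc @ W_dec - X) ** 2)

# ae_err can never beat pca_err (PCA is the optimum for a linear
# bottleneck), but gradient descent brings it close.
print(pca_err, ae_err)
```

The trained encoder need not equal the principal components themselves: any invertible rotation of the bottleneck gives the same reconstruction, which is the sense in which the solution is "strongly related to" rather than identical to PCA.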

When the hidden layers are larger than the input layer, an autoencoder can potentially learn the identity function and become useless; however, experimental results have shown that such autoencoders may still learn useful features in this case.

Auto-encoders can also be used to learn overcomplete feature representations of data.

The "coding" is also known as the embedded space or latent space in dimensionality reduction: the encoder is used to project data into this space, and the decoder to reconstruct data from it.
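As a toy illustration of this encoder/decoder split (untrained numpy weights; all names and sizes are made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_latent = 64, 8   # input dimension and latent-space size

# Illustrative, untrained weights; in practice these are learned.
W_enc = rng.normal(scale=0.1, size=(n_in, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_in))

def encode(x):
    """Project inputs into the latent (embedded) space."""
    return np.tanh(x @ W_enc)

def decode(z):
    """Reconstruct inputs from their latent codes."""
    return z @ W_dec

x = rng.normal(size=(5, n_in))   # a batch of 5 samples
z = encode(x)                    # latent codes, shape (5, 8)
x_hat = decode(z)                # reconstructions, shape (5, 64)
```

Training would adjust W_enc and W_dec so that x_hat approximates x; the sketch only shows how the two halves map between data space and latent space.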

1553 questions
0
votes
1 answer

I am trying to run autoencoder_layers.py using Keras on GPU but I get this error

autoencoder_layers.py (GitHub code):
import theano
from keras import backend as K
from keras.backend.theano_backend import _on_gpu
from keras.layers.convolutional import Convolution2D, UpSampling2D
from keras.layers.core import Dense, Layer
from theano…
Hoda Fakharzadeh
0
votes
0 answers

High loss when training my model

I'm working on image denoising using autoencoders (with the Keras TensorFlow backend). When I train my model, the loss is quite high and stable (somewhere around 2.x). I can't understand what I'm doing wrong. Here's my code: from…
gil
0
votes
1 answer

How to convert tensor to numpy array

I'm a beginner with TensorFlow. I made a simple autoencoder with some help. I want to convert the final decoded tensor to a numpy array. I tried using .eval() but I could not get it to work. How can I convert the tensor to numpy? My input image size is 512*512*1 and data…
0
votes
1 answer

Tensorflow matrix size error using my own data

I'm a beginner with TensorFlow. I wanted to use my own medical raw image data to make a simple autoencoder, but I failed. I guess the matrix size is wrong. This is perhaps a noob question but I can't figure it out. My image data size is 512*512*1 and…
0
votes
1 answer

Dimension Reduction in CLDNN (tensorflow)

I'm trying to write an implementation of CLDNN with tensorflow, like the one in this scheme. I am having a problem with the dimension reduction layer. As far as I understand it, it is made with several stacked Restricted Boltzmann Machines (RBMs)…
Zelgunn
0
votes
1 answer

Stacked Sparse Autoencoder parameters

I work on Stacked Sparse Autoencoders using MATLAB. Can anyone please suggest what values should be used for the Stacked Sparse Autoencoder parameters: L2 weight regularization (lambda), sparsity regularization…
0
votes
1 answer

Autoencoder - encoder vs decoder network size?

I've been reading up on autoencoders and all the examples I see mirror the encoder portion when building the decoder. encoder = [128, 64, 32, 16, 3] decoder = [3, 16, 32, 64, 128] Is this just by convention? Is there any specific reason the…
P-Rod
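For what it's worth, the mirrored decoder in the question above is a convention rather than a requirement: the decoder only has to end at the input dimension. A small numpy sketch with untrained, illustrative weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def layers(sizes):
    """Random, untrained weight matrices for an MLP with the given layer sizes."""
    return [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]

def forward(x, weights):
    for W in weights:
        x = np.tanh(x @ W)
    return x

encoder = layers([128, 64, 32, 16, 3])         # the mirrored convention ...
decoder_mirror = layers([3, 16, 32, 64, 128])
decoder_asym = layers([3, 50, 128])            # ... but any shape ending at 128 works

x = rng.normal(size=(4, 128))
z = forward(x, encoder)
print(forward(z, decoder_mirror).shape)   # (4, 128)
print(forward(z, decoder_asym).shape)     # (4, 128)
```

Both decoders produce reconstructions of the right shape; which one trains better is an empirical question, and the mirror is simply a convenient default.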
0
votes
1 answer

Is one hidden layer sufficient for an auto-encoder to reproduce its input?

I am doing some work with a Theano-based auto-encoder, giving input as samples from a mixture of Gaussians, with one hidden layer. I expected the output to be the same as the input, but I am not achieving it. I have been inspired by this tutorial for the implementation. Is…
Shyamkkhadka
0
votes
1 answer

Sparse autoencoder for Weka

I don't have much knowledge about it, but is there a way to use a sparse autoencoder in Weka? So far I've just used MLPAutoencoder and I'm not certain whether I can configure it for sparsity too. Thank you.
Eduardo Andrade
0
votes
0 answers

Autoencoder - cost decreases but wrong output when more than one data example

I've recently implemented an autoencoder in numpy. I have checked all the gradients numerically and they seem correct, and the cost function also seems to decrease at each iteration, if the learning rate is sufficiently small. The problem: As you…
0
votes
1 answer

Unsupervised training of sparse autoencoders in matlab

I've tried to follow the example provided at MathWorks for training a deep sparse autoencoder (4 layers), so I pre-trained the autoencoders separately and then stacked them into a deep network. When I try to fine-tune this network, though, via the…
L.Thanos
0
votes
1 answer

Why not use a regularization term instead of a sparsity term in an autoencoder?

I have read this article about autoencoders, introduced by Andrew Ng. In it, he uses sparsity like a regularization to drop connections, but the formula for sparsity is different from the regularizer's. So, I want to know why we don't use directly…
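The distinction the question is asking about can be made concrete: in Ng's notes, weight decay (the L2 term) penalizes large weights, while the sparsity term penalizes hidden activations whose average deviates from a small target rho, so the two terms constrain different quantities. A numpy sketch (the function names and the rho/beta values are illustrative):

```python
import numpy as np

def l2_penalty(weights, lam=1e-3):
    """Weight decay: pushes the *weights* toward zero."""
    return lam * sum(np.sum(W ** 2) for W in weights)

def kl_sparsity_penalty(activations, rho=0.05, beta=3.0):
    """Sparsity term: pushes the mean hidden *activation* toward rho."""
    rho_hat = activations.mean(axis=0)   # mean activation of each hidden unit
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return beta * np.sum(kl)

rng = np.random.default_rng(0)
weights = [rng.normal(size=(5, 3)), rng.normal(size=(3, 5))]
acts = 1 / (1 + np.exp(-rng.normal(size=(100, 3))))  # fake sigmoid activations

print(l2_penalty(weights))        # depends only on the weights
print(kl_sparsity_penalty(acts))  # depends only on the activations
```

Small weights do not by themselves make activations rare, and rare activations do not require small weights, which is one reason the two penalties are not interchangeable.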
0
votes
1 answer

Out-of-memory errors running Matlab Autoencoders on a 10^5 sparse matrix

I have a 10^5 × 10^5 sparse matrix called pbAttack. Each element represents whether there is a connection between node i and node j: if there is a connection, pbAttack(i,j) = 1; otherwise, pbAttack(i,j) = 0. I then want to use it following this tutorial:…
ARSN
0
votes
1 answer

DeepLearning4J: Shapes do not match on FeedForward Auto Encoder

I'm implementing an auto-encoder for anomaly detection of IoT sensor data. My data set comes from a simulation, but basically it is accelerometer data - three dimensions, one for each axis. I'm reading it from a CSV file; columns 2-4 contain the data…
Romeo Kienzler
0
votes
1 answer

My loss with fit_generator is 0.0000e+00 (using Keras)

I am trying to use Keras on a "large" dataset with my GPU. To do so, I make use of fit_generator; the problem is that my loss is 0.0000e+00 every time. My print class and generator function: class printbatch(callbacks.Callback): def…
sergi2k