Questions tagged [autoencoder]

An autoencoder, autoassociator, or Diabolo network is an artificial neural network used for learning efficient codings. As such, it belongs to the family of dimensionality-reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, which makes it useful for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (for instance, in a face-recognition task the inputs would be the pixels of the photograph):

  • A number of hidden layers (usually with progressively fewer neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually progressively larger, until the last one, where each neuron has the same meaning as in the input layer), which form the decoder.
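The encoder/decoder structure described above can be sketched in plain NumPy (a minimal illustration, not a production implementation): a tanh hidden layer acts as the encoder, a linear output layer as the decoder, and plain gradient descent trains the pair to reconstruct toy data. All names, sizes, and the synthetic data here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions lying near a 3-D subspace,
# so a 3-unit bottleneck can represent them well.
basis = rng.normal(size=(3, 8))
X = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 8))

n_in, n_hidden = 8, 3
W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))  # encoder weights
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))  # decoder weights
b_dec = np.zeros(n_in)

def encode(x):
    return np.tanh(x @ W_enc + b_enc)   # the hidden code (latent representation)

def decode(h):
    return h @ W_dec + b_dec            # linear reconstruction layer

lr = 0.02
losses = []
for _ in range(1000):
    H = encode(X)
    X_hat = decode(H)
    err = X_hat - X
    losses.append(np.mean(err ** 2))

    # Backpropagation through the two layers (loss summed over output
    # dimensions, averaged over samples).
    g_out = 2 * err / X.shape[0]
    gW_dec = H.T @ g_out
    gb_dec = g_out.sum(axis=0)
    g_hidden = (g_out @ W_dec.T) * (1 - H ** 2)   # tanh derivative
    gW_enc = X.T @ g_hidden
    gb_enc = g_hidden.sum(axis=0)

    W_dec -= lr * gW_dec
    b_dec -= lr * gb_dec
    W_enc -= lr * gW_enc
    b_enc -= lr * gb_enc

# Reconstruction error should drop substantially as training proceeds.
print(round(losses[0], 4), round(losses[-1], 4))
```

After training, `encode` alone gives the compressed 3-D representation of each 8-D sample, which is the dimensionality-reduction use case the description refers to.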

If linear neurons are used, then the optimal solution to an auto-encoder is strongly related to PCA.
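The relationship to PCA can be illustrated numerically: with linear neurons and a k-unit bottleneck, the best possible reconstruction projects the (centred) data onto its top-k principal directions, and any other rank-k linear encoder/decoder pair does worse (Eckart–Young). Below is a toy NumPy sketch; the data and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 300 centred samples in 10 dimensions.
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))
X -= X.mean(axis=0)

k = 3  # bottleneck size

# Optimal linear autoencoder: encode by projecting onto the top-k
# principal directions (right singular vectors), decode by mapping back.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
V_k = Vt[:k].T                 # (10, k) principal directions
X_pca = X @ V_k @ V_k.T        # encode, then decode

# Any other rank-k linear encoder/decoder reconstructs worse; compare
# with a projection onto a random k-dimensional subspace.
Q, _ = np.linalg.qr(rng.normal(size=(10, k)))
X_rand = X @ Q @ Q.T

mse_pca = np.mean((X - X_pca) ** 2)
mse_rand = np.mean((X - X_rand) ** 2)
print(mse_pca, mse_rand)   # the PCA subspace gives the smaller error
```

This is why a linear autoencoder trained to convergence ends up spanning the same subspace as PCA, even though its individual weight vectors need not equal the principal components.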

When the hidden layers are larger than the input layer, an autoencoder can potentially learn the identity function and become useless; however, experimental results have shown that such autoencoders may still learn useful features in practice.

Auto-encoders can also be used to learn overcomplete feature representations of data.

The "coding" is also known as the embedding or latent space in dimensionality reduction: the encoder projects inputs into this space, and the decoder reconstructs them from it.
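As a small illustration of this terminology, the sketch below uses an arbitrary orthonormal weight matrix standing in for a trained model: the encoder projects 6-D inputs into a 2-D latent space, and the decoder reconstructs them. All names and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# A stand-in for already-trained weights: an orthonormal matrix, so that
# decode(encode(x)) is exactly the projection of x onto the latent subspace.
W, _ = np.linalg.qr(rng.normal(size=(6, 2)))   # 6-D input -> 2-D latent

def encode(x):
    return x @ W            # project into the latent (embedded) space

def decode(z):
    return z @ W.T          # reconstruct back into input space

x = rng.normal(size=(5, 6))
z = encode(x)               # shape (5, 2): the codes / embeddings
x_hat = decode(z)           # shape (5, 6): the reconstructions

# Encoding and decoding again changes nothing: the reconstruction
# already lies in the latent subspace.
assert np.allclose(decode(encode(x_hat)), x_hat)
print(z.shape, x_hat.shape)
```

In this picture, `z` is the point in latent space and `x_hat` is its reconstruction; a nonlinear autoencoder plays the same two roles with learned, nonlinear maps.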

1553 questions
0 votes · 2 answers

Selection of activation function

I am making an autoencoder in TensorFlow which takes as input a 3-D matrix whose values lie in the range [-1, 1]. What is the optimal activation function for this scenario? Also, what is the rule of thumb for selecting the activation function w.r.t…
a_parida · 606
0 votes · 1 answer

Autoencoder output does not have the correct shape

I created a class that successively adds layers to an autoencoder, with the objective of learning higher and higher feature representations from the data. However, after training my algo and using the predict function, the output shape is not the correct…
0 votes · 1 answer

Disappearing Dimensions in Multi-Output Keras Model

When I try to train the autoencoder described below, I receive an error: "A target array with shape (256, 28, 28, 1) was passed for an output of shape (None, 0, 28, 1) while using as loss `binary_crossentropy`. This loss expects targets to have…
0 votes · 1 answer

How to create a channel sensitive loss function?

I'm working on a recursive auto-encoder. The neural network takes two 2D images, each shaped (28, 28, 1), and combines them to create an input of shape (28, 28, 2). They are encoded into a (28, 28, 1) shape and decoded back into the original (28, 28, 2) shape. Thus,…
Harrison Rose · 57
0 votes · 1 answer

What is the optimal hidden units size?

Suppose we have a standard autoencoder with three layers (i.e. L1 is the input layer, L3 the output layer with #input = #output = 100 and L2 is the hidden layer (50 units)). I know the interesting part of an autoencoder is the hidden part L2.…
Jeremie · 405
0 votes · 1 answer

Is it necessary to use a linear bottleneck layer for autoencoder?

I'm currently trying to use an autoencoder network for dimensionality reduction (i.e. using the bottleneck activation as the compressed feature). I noticed that a lot of studies that use autoencoders for this task use a linear bottleneck layer. By…
whkang · 360
0 votes · 0 answers

How to include tf.py_func code in Keras?

I am currently getting an error: TypeError: object of type 'NoneType' has no len() when compiling my code, and I am not sure where the error is coming from. Basically what I am trying to do is implement an Autoencoder in Keras that performs an…
0 votes · 2 answers

Difference between 2 LSTM Autoencoders

I would like to know the difference between these 2 models. The one above has 4 layers, looking at the model summary, and you can also define the unit numbers for dimensionality reduction. But what about the 2nd model? It has 3 layers and you can't…
annstudent93 · 131
0 votes · 1 answer

How does MATLAB AutoEncoder scale data?

I found in the documentation of AutoEncoder that: Indicator to rescale the input data, specified as the comma-separated pair consisting of 'ScaleData' and either true or false. Autoencoders attempt to replicate their input at their output. For…
Eghbal · 3,892
0 votes · 1 answer

What proportion of data to feed an auto-encoder for abnormality detection on time-series vibration data

As a noob at auto-encoders and deep learning, I struggle with the following. I am trying to use an auto-encoder to perform abnormality detection on a vibration dataset, starting out with a reference set from NASA. Each data set consists of…
opprud · 169
0 votes · 1 answer

Autoencoder loss and accuracy on a simple binary data

I'm trying to understand and improve the loss and accuracy of the variational autoencoder. I filled the autoencoder with a simple binary data: data1 = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,…
MarioZ · 320
0 votes · 0 answers

Can the convolution layers of the encoder and decoder be different in a convolutional auto-encoder?

For example, we have different filter sizes and numbers of feature maps, and the numbers of convolutional layers are also different; the hidden units are more than the input units. The specific code is as follows. I don't know if this is called…
ohdoughnut · 75
0 votes · 0 answers

Fit a trained autoencoder with another training data set in keras

I have trained a denoising autoencoder with a training set df_noised_noy_norm_y in Keras. I have another data set, df_active, and I made this autoencoder predict its encoded representation. Now, I want to fine-tune the trained autoencoder with this…
Mari · 69
0 votes · 1 answer

Multiple time series prediction with LSTM Autoencoder in Keras

I'm trying to build an LSTM autoencoder as shown here. My code: from keras.layers import Input, LSTM, RepeatVector from keras.models import Model inputs = Input(shape=(window_length, input_dim)) encoded = LSTM(latent_dim)(inputs) decoded =…
Alessandro · 742
0 votes · 1 answer

Prediction Error with stacked denoising autoencoder in keras

I trained a stacked denoising autoencoder with Keras. Everything was fine until it came to predicting new samples. The samples for prediction are named the 'active' part; I did the necessary pre-processing and normalization to this part as I did to the…
Mari · 69