Questions tagged [autoencoder]

An autoencoder, autoassociator or Diabolo network is an artificial neural network used for learning efficient codings. As such, it belongs to the family of dimensionality-reduction algorithms.

The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, which means it is typically used for dimensionality reduction. Auto-encoders use two or more layers, starting from the input data (for instance, in a face-recognition task this would be the pixels of the photograph):

  • A number of hidden layers (usually with a smaller number of neurons), which form the encoder.
  • A number of hidden layers leading to an output layer (usually growing progressively larger until the last one, where each neuron has the same meaning as in the input layer), which form the decoder.
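For illustration, the encoder/decoder structure above can be sketched as a one-hidden-layer autoencoder trained by plain gradient descent. This is a minimal NumPy sketch; the data, layer sizes, and learning rate are illustrative assumptions, not taken from any particular question below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie on a
# 3-dimensional subspace, so a 3-unit bottleneck can represent them.
Z_true = rng.normal(size=(200, 3))
X = Z_true @ rng.normal(size=(3, 8))

# Encoder (W1, b1) into a 3-unit tanh bottleneck; decoder (W2, b2) back to 8-d.
W1 = rng.normal(scale=0.1, size=(8, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.1, size=(3, 8)); b2 = np.zeros(8)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # encoder: compressed code
    Y = H @ W2 + b2            # decoder: reconstruction of the input
    return H, Y

mse0 = ((forward(X)[1] - X) ** 2).mean()   # reconstruction error before training

lr = 0.05
for step in range(2000):
    H, Y = forward(X)
    err = Y - X                                  # reconstruction residual
    # Backpropagate the mean-squared reconstruction error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)           # derivative of tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = ((forward(X)[1] - X) ** 2).mean()   # error after training: should have dropped
```

Because the target is the input itself, no labels are needed, which is why the listed questions use autoencoders for unsupervised learning.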

If linear neurons are used, then the optimal solution to an auto-encoder is strongly related to PCA.
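This relationship can be made concrete with the singular value decomposition: for a linear auto-encoder with a k-dimensional code, the optimal reconstruction is the projection onto the top k principal components (Eckart-Young). A NumPy sketch with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)          # PCA assumes centered data

# Top-k principal directions come from the SVD of the centered data.
k = 2
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                     # (5, k) orthonormal basis: the linear "encoder"

Z = Xc @ P                       # codes (latent representation)
X_hat = Z @ P.T                  # reconstruction (the linear "decoder")

pca_err = ((Xc - X_hat) ** 2).sum()

# The residual equals the energy in the discarded singular values,
# and any other rank-k linear projection reconstructs no better:
Q, _ = np.linalg.qr(rng.normal(size=(5, k)))
other_err = ((Xc - (Xc @ Q) @ Q.T) ** 2).sum()
```

So a linear autoencoder trained to a global optimum spans the same subspace as PCA, though its weights need not be the orthonormal principal components themselves.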

When the hidden layers are larger than the input layer, an autoencoder can potentially learn the identity function and become useless; however, experimental results have shown that such autoencoders might still learn useful features in this case.

Auto-encoders can also be used to learn overcomplete feature representations of data.

The "coding" is also known as the embedding space or latent space in dimensionality reduction, where the encoder is used to project the data and the decoder to reconstruct it.
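Schematically, the two halves are used separately once trained: the encoder alone embeds new points into the latent space, and the decoder maps codes back to input space. A shape-only sketch (the weights here are placeholder random matrices, purely to show the roles and dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
W_enc = rng.normal(size=(8, 3))   # input space (8-d) -> latent space (3-d)
W_dec = rng.normal(size=(3, 8))   # latent space (3-d) -> input space (8-d)

encode = lambda x: np.tanh(x @ W_enc)   # projection into the latent space
decode = lambda z: z @ W_dec            # reconstruction from a code

x_new = rng.normal(size=(5, 8))
z = encode(x_new)        # low-dimensional embedding of unseen points
x_rec = decode(z)        # approximate reconstruction in input space
```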

1553 questions
0
votes
1 answer

How to implement this in neural network

I am using an autoencoder for unsupervised learning. I was wondering whether skipping one input [at testing] will affect output accuracy, as my inputs are both nominal and numerical. Will it be able to maintain the relations learned among the inputs and…
stan.steve
  • 111
  • 10
0
votes
0 answers

How to stack Autoencoders / create a Deep Autoencoder with a Theano class

I understand the concept behind Stacked/Deep Autoencoders and therefore want to implement it with the following code of a single-layer de-noising Autoencoder. Theano also provides a tutorial for a Stacked Autoencoder, but this is trained in a…
0
votes
1 answer

Implementing a saturating autoencoder in theano

I am trying to implement an autoencoder, using the regularization method described in this paper: "Saturating Auto-Encoders", Goroshin et al., 2013. Essentially, this tries to minimize the difference between the output of the hidden layer and the…
0
votes
1 answer

How can I speed up an autoencoder to use on text data written in python's theano package?

I'm new to theano and I'm trying to adapt the autoencoder script here to work on text data. This code uses the MNIST dataset as training data. This data is in the form of a numpy 2d array. My data is a csr sparse matrix of about 100,000 instances…
clurhur
  • 81
  • 6
0
votes
1 answer

Autoencoders: Papers and Books regarding algorithms for Training

Which are some of the famous research papers and/or books which concern Autoencoders and the various different training algorithms for Autoencoders? I'm talking about research papers and/or books which lay the foundation for the different training…
Sahil
  • 1,346
  • 1
  • 12
  • 17
0
votes
1 answer

Does Theano support variable split?

In my Theano program, I want to split the tensor matrix into two parts, with each of them making different contributions to the error function. Can anyone tell me whether automatic differentiation supports this? For example, for a tensor matrix…
-1
votes
0 answers

Are the features of latent variables related to the accuracy in VAE?

I am a learner of deep learning. I employed the Variational Autoencoder (VAE) methodology to categorize three distinct forms of vibrational patterns. Within the latent space, clear boundaries of the latent variables corresponding to the three…
Shawn
  • 1
  • 1
-1
votes
1 answer

How to use AutoEncoder to evaluate feature importance and select features

I know an autoencoder (AE) can compress information and extract new features which represent the input data. I found a paper which used an AE to evaluate the importance of every feature in the original matrix. In the subsequent analysis, the research…
-1
votes
1 answer

Defining loss function for autoencoder in Tensorflow

I am trying to create an autoencoder (CVAE) along similar lines to the one given here: Use Conditional Variational Autoencoder for Regression (CVAE). However, in vae_loss() and in KL_loss(), different variables (l_sigma, mu) are used than what these…
ewr3243
  • 397
  • 3
  • 19
-1
votes
1 answer

Why is super() used within the same class

class AnomalyDetector(Model): def __init__(self): super(AnomalyDetector, self).__init__() self.encoder = tf.keras.Sequential([ layers.Dense(64, activation="relu"), layers.Dense(32, activation="relu"), layers.Dense(16,…
-1
votes
2 answers

Is it possible to Implement a node of data structures in artificial Intelligence?

I am working on a project which has to do image prediction using artificial intelligence; this is the image. You can see that the nodes are attached to each other: first the image is encoded, then comes the hidden layer, and then the decoding layer. My…
Ibrar
  • 35
  • 1
  • 5
-1
votes
1 answer

Feature normalization for anomaly detection model

I have a question on feature normalization/standardization (scaling) for anomaly detection / novelty detection using autoencoders. Typically in ML problems, we split the data into train/test sets, fit a normal/standard scaler on the train set, and use that to transform…
-1
votes
1 answer

Evaluating the performance of variational autoencoder on unlabeled data

I've designed a variational autoencoder (VAE) that clusters sequential time-series data. To evaluate the performance of the VAE on labeled data, first I run KMeans on the raw data and compare the generated labels with the true labels using Adjusted…
-1
votes
1 answer

Why is my CAE architecture unable to learn anything despite a single colored repetitive noisy image?

This is my pytorch architecture, it takes colored 3x256x256 images as input class AutoEncoder(nn.Module): def __init__(self, channels : int, latent_dim : int): super().__init__() ifac = 2 …
-1
votes
1 answer

Autoencoder for dimensionality reduction in DL4J

I'm trying to write an autoencoder for dimensionality reduction in DL4J, but all the autoencoder examples I can find for DL4J are for outlier…