
I have recently been reading about variational autoencoders (VAEs). In this method, z is sampled from a normal distribution. I found some existing code like the snippets below.

    # eps ~ N(0, 1): one draw per sample, per batch element, per latent dimension
    eps = srng.normal((self.L, mu.shape[0], self.n_latent))
    # Reparametrize: here log_sigma is the log-variance, so exp(0.5 * log_sigma) = sigma
    z = mu + T.exp(0.5 * log_sigma) * eps

https://github.com/y0ast/Variational-Autoencoder/blob/master/VAE.py#L107

        mean, var = args
        # epsilon ~ N(0, 1), same shape as the mean tensor
        epsilon = K.random_normal(K.shape(mean))
        # shift by the mean and scale by `var` (used here as a direct scale factor)
        return mean + var * epsilon

https://github.com/rarilurelo/keras-VAE/blob/master/probability_distributions.py#L30

But I am not sure how this comes from the formula. I can see why mu is used, but I have no idea about the second term. I assume it comes from the variance of the data. Could you explain this in more detail?

jef
  • Apparently, a Gaussian N(0,1) is sampled: the mean is 0 and the std. dev. is 1. After that, to get a Gaussian N(mean, stddev) you shift and scale: N(mean, stddev) = mean + stddev*N(0,1) – Severin Pappadeux Sep 29 '17 at 01:19
  • The first one is `T.exp(0.5 * log_sigma)` whereas the second one is `var`. So why does the first one use such a formula? – jef Sep 29 '17 at 13:33
  • Well, it looks like in the first one we only get the logarithm of sigma, so we have to exponentiate it to get sigma back – Severin Pappadeux Sep 29 '17 at 16:11
  • I still have no idea about the first one. Is this a common way to calculate the value? – jef Sep 29 '17 at 16:48
  • I quickly looked into the paper the author of the first repo was trying to reimplement: https://arxiv.org/abs/1312.6114. It looks like there is an estimate of `log(q)` via maximum likelihood, which is then used for sampling. Obviously, to sample a Gaussian when what you have on hand is an estimate of log_sigma, you have to exponentiate it. Anyway, from a quick look it seems reasonable; a sketch of the idea follows below – Severin Pappadeux Sep 29 '17 at 18:45
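
A minimal NumPy sketch of the shift-and-scale idea from these comments (my own illustration, not code from either repo; it assumes, as in the paper linked above, that the network outputs the log-variance `log_var = log(sigma^2)`):

    import numpy as np

    rng = np.random.default_rng(0)

    # Target parameters (assumed values, purely for illustration)
    mu = 2.0
    sigma = 0.5
    log_var = np.log(sigma ** 2)  # what the encoder would output

    # Reparameterization: sample eps ~ N(0, 1), then shift and scale
    eps = rng.standard_normal(100_000)
    z = mu + np.exp(0.5 * log_var) * eps  # exp(0.5 * log_var) == sigma

    # Empirically z ~ N(mu, sigma^2): prints values close to 2.0 and 0.5
    print(z.mean(), z.std())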

0 Answers