
In my study I am using the so-called Lee-Carter model (a mortality model), whose parameters are obtained by applying Singular Value Decomposition (SVD) to the matrix of log mortality rates minus the average age-specific mortality pattern. I am trying to find a substitute for the SVD, and it seems a good choice could be an autoencoder (possibly implemented with a recurrent neural network). In fact, an SVD is equivalent to an autoencoder whose activation function is linear. Building on this, I would like to try a non-linear activation function in order to obtain a non-linear analogue of the components given by the SVD. These are the steps used to obtain the data (mortality rates by age and year):

rm(list = ls())

library(MortalitySmooth)

ages <- 0:100

years <- 1960:2009

D <- as.matrix(selectHMDdata("Japan", "Deaths",
                             "Females", ages,
                             years))

D[D==0] <- 1

E <- as.matrix(selectHMDdata("Japan", "Exposures",
                             "Females", ages,
                             years))

E[E==0] <- 1


lMX <- log(D/E)

alpha <- apply(lMX, 1, mean)

cent.logMXMatrix <- sweep(lMX, 1, alpha)
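Since the question asks for Python code, the same preprocessing can be sketched in numpy. This is a minimal sketch: the HMD fetch is R-specific, so the matrices `D` and `E` below are random placeholders standing in for the real deaths and exposures (in practice you would export them from R, e.g. with `write.csv`, and load them here).

```python
import numpy as np

# Placeholder matrices standing in for the HMD data (ages 0-100, years 1960-2009).
rng = np.random.default_rng(0)
D = rng.poisson(50, size=(101, 50)).astype(float)  # deaths: ages x years
E = rng.uniform(1000.0, 5000.0, size=(101, 50))    # exposures: ages x years

# Same guards as in the R code
D[D == 0] = 1
E[E == 0] = 1

lMX = np.log(D / E)                # log mortality rates
alpha = lMX.mean(axis=1)           # age-specific means, like apply(lMX, 1, mean)
cent_logMX = lMX - alpha[:, None]  # row-wise centering, like sweep(lMX, 1, alpha)
```

After this, each row of `cent_logMX` has mean zero, which is exactly the centered matrix the SVD (or an autoencoder) is applied to.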

Now we apply SVD to cent.logMXMatrix. In R I use:

SVD <- svd(cent.logMXMatrix)

and extract the components of the SVD:

SVD$d
SVD$v
SVD$u 
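For reference, the same decomposition is available in Python via `numpy.linalg.svd`; `U`, `s`, and `Vt` correspond to R's `SVD$u`, `SVD$d`, and `t(SVD$v)` (signs of individual vectors may differ between implementations). The matrix below is a random stand-in for the centered log-mortality matrix; in the Lee-Carter model the first left singular vector plays the role of the age pattern b_x and the scaled first right singular vector the period index k_t.

```python
import numpy as np

rng = np.random.default_rng(1)
cent_logMX = rng.standard_normal((101, 50))  # placeholder for cent.logMXMatrix

# numpy returns U, the singular values, and V transposed
U, s, Vt = np.linalg.svd(cent_logMX, full_matrices=False)

# Lee-Carter rank-1 term: age effect and period effect
beta = U[:, 0]            # ~ b_x (up to normalization)
kappa = s[0] * Vt[0, :]   # ~ k_t
rank1 = np.outer(beta, kappa)  # best rank-1 approximation of cent_logMX
```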

I would like to get the SVD components using an autoencoder. Is it possible? I would appreciate your opinion and suggestions, and, if it is possible, a basic Python code formulation of an autoencoder applied to "cent.logMXMatrix".

Thanks a lot, Andrea

1 Answer


A one-layer autoencoder maps a datapoint into a low-dimensional latent space and then projects it back to the original space while minimizing a reconstruction error; the non-linearity comes from the activation functions applied in each layer.
If you replace the non-linear activations with the identity and use the L2 norm as the reconstruction error, the autoencoder performs the same operation as a truncated SVD: it recovers the same low-rank subspace (though the learned weights need not equal the singular vectors exactly, only span the same subspace).

# use keras with tensorflow backend
# This is a vanilla autoencoder with one hidden layer
from keras.layers import Input, Dense
from keras.models import Model

input_dim = Input(shape=(nfeat,))  # nfeat: the number of initial features
encoded1 = Dense(layer_size1, activation='linear')(input_dim)  # layer_size1: size of the encoding layer
decoded1 = Dense(nfeat, activation='linear')(encoded1)  # decoder applied to the encoded layer
autoencoder = Model(inputs=input_dim, outputs=decoded1)
autoencoder.compile(loss='mean_squared_error', optimizer='adam')
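You would then fit with something like `autoencoder.fit(X, X, epochs=..., batch_size=...)`, where `X` is the centered matrix with one datapoint per row. The claimed equivalence can be checked without Keras: by the Eckart-Young theorem, the rank-k truncated SVD is the reconstruction a linear autoencoder converges to, i.e. no other rank-k linear projection can achieve a lower L2 reconstruction error. A small numpy sketch of that check, using a random stand-in for the centered matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((101, 50))  # placeholder for cent.logMXMatrix

k = 2  # latent dimension (layer_size1 in the Keras code)

# Best rank-k reconstruction: truncated SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_svd = (U[:, :k] * s[:k]) @ Vt[:k, :]

# A random rank-k orthogonal projection, for comparison
P = np.linalg.qr(rng.standard_normal((101, k)))[0]  # random orthonormal basis
X_rand = P @ (P.T @ X)

mse_svd = np.mean((X - X_svd) ** 2)
mse_rand = np.mean((X - X_rand) ** 2)
# mse_svd <= mse_rand: the SVD reconstruction error is never larger
```

With non-linear activations, as the question proposes, the autoencoder is no longer equivalent to an SVD; the encoder output (the activations of `encoded1`) is then the non-linear analogue of the period components, and it no longer decomposes into separate `d`, `u`, `v` factors.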
aferjani