
I am a deep learning learner. I used a Variational Autoencoder (VAE) to categorize three distinct types of vibration patterns. In the latent space, the latent variables corresponding to the three vibration categories are clearly separated, and the reconstructions look reasonable. However, the accuracy is only about 70%. Specifically, the first figure shows the latent space, with red scatter points for vibration pattern 1, blue for pattern 2, and green for pattern 3. The second figure shows the accuracy result. The last three figures show the reconstructions of the three patterns.

Could someone with experience help me understand this problem?

[Figure 1: latent space, colored by vibration pattern]

[Figure 2: accuracy]

[Figures 3–5: reconstructions of the three patterns]

Below is the code for the VAE:

# Imports assumed by this snippet (not shown in the original)
from tensorflow.keras import backend as K
from tensorflow.keras import losses
from tensorflow.keras.layers import Dense, Input, Lambda, LeakyReLU
from tensorflow.keras.models import Model

input_layer = Input(shape=(input_dim,))
latent_layer = Dense(128, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(input_layer)
# latent_layer = BatchNormalization()(latent_layer)
latent_layer = Dense(32, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(latent_layer)
z_mean = Dense(latent_dim, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(latent_layer)
z_log_var = Dense(latent_dim, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(latent_layer)

latent_input = Input(shape=(latent_dim,), name='z_sampling')

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0., stddev=0.005)
    # Standard reparameterization scales by the standard deviation,
    # exp(0.5 * z_log_var); exp(z_log_var) would be the variance.
    # (The usual choice for epsilon is stddev=1.0 as well.)
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

z = Lambda(sampling)([z_mean, z_log_var])

encoder = Model(input_layer, [z_mean, z_log_var, z], name='encoder')
encoder.summary()

aa = Dense(32, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(latent_input)
# aa = BatchNormalization()(aa)
aa = Dense(128, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(aa)
outputs = Dense(input_dim, activation=LeakyReLU(alpha=0.1), kernel_initializer='RandomNormal')(aa)

decoder = Model(latent_input, outputs, name='decoder')
decoder.summary()

output_layer = decoder(encoder(input_layer)[2])

vae = Model(input_layer, output_layer, name='VAE')
vae.summary()

# Compile the loss function
reconstruction_loss = losses.mean_squared_error(input_layer, output_layer)
reconstruction_loss *= input_dim
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)

# Compile the autoencoder model
vae.compile(optimizer='adam', metrics=['accuracy'])
history = vae.fit(train_images, train_images, epochs=800, batch_size=8, shuffle=True) 
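For reference, here is a minimal NumPy sketch of the reparameterization trick that the `sampling` function above implements. In the standard formulation, the noise is scaled by the standard deviation `exp(0.5 * z_log_var)` and `epsilon` has unit variance; the names and values below are illustrative only, not taken from the model above:

```python
import numpy as np

def reparameterize(z_mean, z_log_var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, 1)."""
    eps = rng.standard_normal(z_mean.shape)
    # exp(0.5 * log_var) is the standard deviation; exp(log_var) is the variance
    return z_mean + np.exp(0.5 * z_log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros((10000, 2))
log_var = np.full((10000, 2), np.log(4.0))  # variance 4 -> sigma 2
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (10000, 2)
```

Scaling by `exp(z_log_var)` instead would square the intended standard deviation, which can make the sampled latent noise much larger or smaller than the encoder intends.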
Shawn
  • Please add more details to your question, such as: a description of each image, and the details of your VAE model, e.g. what loss function you are using. The solution will most likely be in changing your loss function or providing better data and encoded support mapping; however, without more information I can't help. – Jamie Nicholl-Shelley Aug 28 '23 at 23:33
    Many thanks for your suggestions. More details have been added. – Shawn Aug 29 '23 at 00:11

0 Answers