I'm trying to build an autoencoder in Keras. My goal is to compress a 200-dimensional vector space into a 20-dimensional one.
For some reason, whenever I train the autoencoder, it ends up not using some of the elements of its compressed representation. For example, after the last training run, elements 7, 12 and 15 of the encoded vector are 0 for all inputs.
The autoencoder does work, however: I'm able to compress and decompress my vectors with little loss (see the checks after the code below). Still, I don't understand why this happens; I expected all elements to be used, and I assumed that using all of them would also improve accuracy.
Here is the code I use to build and train the autoencoder:
from keras.layers import Input, Dense
from keras.models import Model
from keras import losses

orig_length = 200   # dimensionality of the input vectors
encoding_dim = 20   # dimensionality of the bottleneck

# Encoder: 200 -> 150 -> 100 -> 50 -> 20
input_vec = Input(shape=(orig_length,))
encoded = Dense(150, activation='relu')(input_vec)
encoded = Dense(100, activation='relu')(encoded)
encoded = Dense(50, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)

# Decoder: 20 -> 50 -> 100 -> 150 -> 200
decoded = Dense(50, activation='relu')(encoded)
decoded = Dense(100, activation='relu')(decoded)
decoded = Dense(150, activation='relu')(decoded)
decoded = Dense(orig_length, activation='linear')(decoded)

# Full autoencoder and standalone encoder
autoencoder = Model(input_vec, decoded)
encoder = Model(input_vec, encoded)

# Standalone decoder: reuse the last four Dense layers of the autoencoder
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-4](encoded_input)
decoder_layer = autoencoder.layers[-3](decoder_layer)
decoder_layer = autoencoder.layers[-2](decoder_layer)
decoder_layer = autoencoder.layers[-1](decoder_layer)
decoder = Model(encoded_input, decoder_layer)

autoencoder.compile(optimizer='adam', loss=losses.mean_squared_error)
# input_arr is my training data, shape (num_samples, orig_length)
autoencoder.fit(input_arr, input_arr, batch_size=256, epochs=100)
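For reference, this is roughly how I check which bottleneck dimensions are dead after training (a minimal sketch; input_arr is the same training array passed to fit above):

import numpy as np

codes = encoder.predict(input_arr)
# a ReLU unit that never fires outputs exactly 0 for every sample
dead = np.where(np.all(codes == 0, axis=0))[0]
print('dead dimensions:', dead)   # e.g. [ 7 12 15] after the last run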
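And this is how I verify that, despite the dead dimensions, I can still compress and decompress with little loss:

reconstructed = decoder.predict(codes)
mse = ((input_arr - reconstructed) ** 2).mean()
print('reconstruction MSE:', mse)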