
I'm new to Federated Learning. I'm working on federated autoencoders: I have the AE model below and I'm trying to apply federated learning to it. However, I have no idea how to handle the input_shape problem (the number of features): when I split the dataset across clients, each client ends up with a different number of features.
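To make the problem concrete, here's a minimal sketch of the situation (toy data; the 3-client column split is just an illustration, not my real pipeline):

```python
import numpy as np

# Toy dataset: 100 samples, 10 features.
data = np.random.rand(100, 10)

# Splitting the columns across 3 clients: each client ends up with a
# different number of features, so input_shape differs per client.
splits = np.array_split(np.arange(data.shape[1]), 3)
client_data = [data[:, cols] for cols in splits]

print([cd.shape[1] for cd in client_data])  # [4, 3, 3]
```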

from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model

input_shape = df.shape[1]  # number of features in the (non-federated) dataframe

def ae_model(input_dim):
    input_layer = Input(shape=(input_dim,))
    encoder = Dense(input_dim, activation='tanh', use_bias=True, name='Encoder_Layer')(input_layer)
    latent_layer = Dense(int(input_dim/2), activation='tanh', use_bias=True, name='Latent_Code_Layer')(encoder)
    dropout_layer = Dropout(0.25)(latent_layer)
    decoder = Dense(input_dim, activation='linear', use_bias=True, name='Decoder_Layer')(dropout_layer)
    ae = Model(input_layer, decoder)

    return ae

vanilla = ae_model(input_shape)

The model works well in a non-federated environment. My question is: what do I need to change to make it work in a federated setting? Any help counts.

def create_keras_model():
    # This is the line I'm unsure about: I used shape=(None,) because the
    # number of features differs per client.
    input_layer = tf.keras.Input(shape=(None,))
    encoded = tf.keras.layers.Dense(input_dim, activation='tanh')(input_layer)
    latent_layer = tf.keras.layers.Dense(latent_code, activation='tanh')(encoded)
    # Decoder
    decoded = tf.keras.layers.Dense(input_dim, activation='linear')(latent_layer)

    model = tf.keras.Model(input_layer, decoded)
    return model



def model_fn():
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=keras_model.input_spec,  # not sure this is what TFF expects here
        loss=tf.keras.losses.MeanSquaredError(),  # reconstruction loss for the AE
        metrics=[tf.keras.metrics.MeanSquaredError()])


trainer = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.Adam(learning_rate=0.0002),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.0001))
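One workaround I've been considering (not sure it's the right approach for federated learning): zero-pad each client's features up to a common width, so every client can share a single fixed input_dim. A numpy-only sketch of the idea (the per-client shapes here are made up):

```python
import numpy as np

# Hypothetical per-client matrices with different feature counts.
client_data = [np.random.rand(100, k) for k in (4, 3, 3)]

# Zero-pad each client's features to the maximum width so one model
# with a fixed input_dim could, in principle, serve every client.
max_dim = max(cd.shape[1] for cd in client_data)
padded = [np.pad(cd, ((0, 0), (0, max_dim - cd.shape[1]))) for cd in client_data]

print({cd.shape for cd in padded})  # all clients now share shape (100, 4)
```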
dmaia