I would like to fine-tune a pre-trained model with federated learning, so I do this:
def create_keras_model():
    # Load the pre-trained model and attach a new 3-class classification head.
    baseModel = tf.keras.models.load_model(r"path\to\model")
    headModel = baseModel.output
    model_output = tf.keras.layers.Dense(3)(headModel)
    model = tf.keras.Model(inputs=baseModel.input, outputs=model_output)
    # Freeze the pre-trained layers so only the new head is trainable.
    for layer in baseModel.layers:
        layer.trainable = False
    return model
state = iterative_process.initialize()
keras_model = create_keras_model()
# Overwrite the initial server state with the pre-trained weights.
state = tff.learning.state_with_new_model_weights(
    state,
    trainable_weights=[v.numpy() for v in keras_model.trainable_weights],
    non_trainable_weights=[
        v.numpy() for v in keras_model.non_trainable_weights
    ])
evaluation = tff.learning.build_federated_evaluation(model_fn)
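For completeness, model_fn and iterative_process (used above and in the loop below) are built roughly along these lines; the loss, optimizer choices, and learning rates here are placeholders, not necessarily what my code uses:

def model_fn():
    # TFF requires a fresh, uncompiled Keras model on every call.
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=train_data[0].element_spec,
        # The new Dense(3) head has no softmax, so the loss reads logits.
        loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))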
And here is the training loop:
for round_num in range(1, NUM_ROUNDS):
    state, _ = iterative_process.next(state, train_data)
    test_metrics = evaluation(state.model, test_data)
    print('round {:2d}, metrics={}'.format(round_num, test_metrics))
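For reference, train_data and test_data are lists of per-client tf.data.Dataset objects, built roughly like this (a sketch of my preprocessing; BATCH_SIZE and the per-client image/label arrays are placeholders):

def make_client_dataset(images, labels):
    # images: float32 image array; labels: one-hot vectors of length 3.
    return (tf.data.Dataset.from_tensor_slices((images, labels))
            .shuffle(buffer_size=len(images))
            .batch(BATCH_SIZE))

# iterative_process.next and the evaluation both expect a list of client datasets.
train_data = [make_client_dataset(x, y)
              for x, y in zip(client_train_images, client_train_labels)]
test_data = [make_client_dataset(test_images, test_labels)]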
The problem is that the test accuracy stays constant and does not increase across rounds:
round 1, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.8680933)])
round 2, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.836558)])
round 3, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.82953715)])
round 4, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.82713753)])
round 5, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.82613766)])
round 6, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.8256878)])
round 7, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.82548285)])
round 8, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.825384)])
round 9, metrics=OrderedDict([('categorical_accuracy', 0.67105263), ('loss', 0.825332)])
I would like to understand the reason. Is there another way to do this? Note that my dataset is an image dataset with 3 classes.