I am trying to use the federated learning framework Flower with TensorFlow. My code runs without errors, but it does not show the federated loss and accuracy. What am I doing wrong?

Server-side code:

import flwr as fl
import sys
import numpy as np

class SaveModelStrategy(fl.server.strategy.FedAvg):
    def aggregate_fit(self, rnd, results, failures):
        aggregated_weights = super().aggregate_fit(rnd, results, failures)
        # if aggregated_weights is not None:
        #     # Save aggregated weights after each round
        #     print(f"Saving round {rnd} aggregated_weights...")
        #     np.savez(f"round-{rnd}-weights.npz", *aggregated_weights)
        return aggregated_weights

# Create strategy and run server
strategy = SaveModelStrategy()

# Start Flower server for two rounds of federated learning
fl.server.start_server(
    server_address="localhost:" + str(sys.argv[1]),
    # server_address="[::]:8080",
    config={"num_rounds": 2},
    grpc_max_message_length=1024 * 1024 * 1024,
    strategy=strategy,
)

Server-side output: (screenshot of the server log omitted)


1 Answer


Looking at the source code of Flower's app.py, I realized that fl.server.start_server() accepts force_final_distributed_eval=True. Passing this flag makes the server run a final round of distributed (client-side) evaluation, so the federated loss and accuracy are reported.

Not sure whether this behaviour is intended, but it solved my problem.
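A minimal sketch of the fix, assuming an older Flower release whose start_server() exposes the force_final_distributed_eval keyword (newer releases may have removed it); the plain FedAvg strategy stands in for the custom SaveModelStrategy from the question:

```python
import sys

import flwr as fl

# Force a final round of distributed (client-side) evaluation so that
# the aggregated federated loss and accuracy are computed and logged.
# `force_final_distributed_eval` is taken from older Flower app.py code
# and may not exist in your installed version.
fl.server.start_server(
    server_address="localhost:" + str(sys.argv[1]),
    config={"num_rounds": 2},
    grpc_max_message_length=1024 * 1024 * 1024,
    strategy=fl.server.strategy.FedAvg(),
    force_final_distributed_eval=True,
)
```

This is a server-startup fragment; it blocks until the configured rounds finish, so run it alongside at least two connected clients.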
