I am working through this tutorial, and I would like each client to train a different architecture and a different model. Is this possible?
1 Answer
TFF does support different clients having different model architectures.
However, the Federated Learning for Image Classification tutorial uses tff.learning.build_federated_averaging_process, which implements the Federated Averaging (McMahan et al., 2017) algorithm; by definition, every client receives the same architecture. This is accomplished in TFF by "mapping" (in the functional programming sense) the model onto each client dataset to produce a new model, and then aggregating the results.
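To make the uniform-architecture constraint concrete, here is a minimal sketch of how that process is typically driven, in the spirit of the tutorial. It assumes a TFF 0.x release where build_federated_averaging_process accepts a model_fn and a client_optimizer_fn; the small Dense model, the input_spec, and federated_train_data are illustrative placeholders, not the tutorial's exact code.

```python
import tensorflow as tf
import tensorflow_federated as tff


def model_fn():
  # Every client's model is built from this single function, so all clients
  # necessarily share one architecture (illustrative stand-in for the
  # tutorial's MNIST model).
  keras_model = tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(10, activation='softmax'),
  ])
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=(
          tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
          tf.TensorSpec(shape=[None, 1], dtype=tf.int32),
      ),
      loss=tf.keras.losses.SparseCategoricalCrossentropy())


iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = iterative_process.initialize()
# Each round maps the same model onto every client dataset and aggregates.
# federated_train_data would be a list of tf.data.Dataset, one per client:
# state, metrics = iterative_process.next(state, federated_train_data)
```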
To achieve different clients having different architectures, a different federated learning algorithm would need to be implemented. There are a couple of (non-exhaustive) ways this could be expressed:
1. Implement an alternative to ClientFedAvg. This method applies a fixed model to each client's dataset; an alternate implementation could instead construct a different architecture per client.

2. Create a replacement for tff.learning.build_federated_averaging_process that uses a different function signature, splitting out groups of clients that would receive different architectures. For example, FedAvg currently has the signature:

   (<state@SERVER, data@CLIENTS> → <state@SERVER, metrics@SERVER>)

   This could be replaced with a method with the signature:

   (<state@SERVER, data1@CLIENTS, data2@CLIENTS, ...> → <state@SERVER, metrics@SERVER>)

   which would allow the function to internally tff.federated_map() different model architectures onto different client datasets (a sketch follows this list). This would likely only be useful in FL simulations or for experimentation and research.
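Here is a hypothetical sketch of the second option. It is not a working FedAvg variant: run_one_round, client_update_arch_a/b, and the float32-sequence data type are assumptions made up for illustration, with trivial reductions standing in for per-architecture client training. It only shows how two groups of client datasets, both placed at CLIENTS, can each be mapped through a different client computation.

```python
import tensorflow as tf
import tensorflow_federated as tff

CLIENT_DATA_TYPE = tff.SequenceType(tf.float32)


@tff.tf_computation(CLIENT_DATA_TYPE)
def client_update_arch_a(dataset):
  # Placeholder "training" for architecture A: reduce local data to a scalar.
  return dataset.reduce(0.0, lambda acc, x: acc + x)


@tff.tf_computation(CLIENT_DATA_TYPE)
def client_update_arch_b(dataset):
  # Placeholder "training" for architecture B.
  return dataset.reduce(0.0, lambda acc, x: acc + 2.0 * x)


@tff.federated_computation(
    tff.FederatedType(CLIENT_DATA_TYPE, tff.CLIENTS),
    tff.FederatedType(CLIENT_DATA_TYPE, tff.CLIENTS))
def run_one_round(data_group_a, data_group_b):
  # federated_map applies a different computation to each group of clients,
  # standing in for training a different architecture per group.
  updates_a = tff.federated_map(client_update_arch_a, data_group_a)
  updates_b = tff.federated_map(client_update_arch_b, data_group_b)
  # How to merge these into one global model is the open design question;
  # here each group is simply averaged on its own.
  return tff.federated_mean(updates_a), tff.federated_mean(updates_b)


# Example invocation in a simulation: one dataset per client in each group.
# result = run_one_round([[1.0, 2.0]], [[3.0]])
```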
However, in federated learning there will be difficult questions around how to aggregate the models back into a single global model on the server; this probably needs to be designed first.
