I know that TensorFlow offers a Distributed Training API that can train on multiple devices such as GPUs, CPUs, and TPUs, or across multiple machines (workers), following this doc: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras
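For context, here is a minimal sketch of the multi-worker data-parallel setup as I understand it from that tutorial (the hostnames, ports, and model are placeholders; each worker would run this with its own `index` in `TF_CONFIG`):

```python
import json
import os

import tensorflow as tf

# Each worker needs a TF_CONFIG describing the cluster; this is a
# two-worker example (the second machine would use "index": 1).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0},
})

# Data-parallel strategy: every worker holds a replica of the model,
# processes a shard of each global batch, and gradients are
# all-reduced across workers.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

# Model creation and compilation must happen inside the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# The dataset is sharded across workers automatically by default.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(64)

model.fit(dataset, epochs=3)
```

This works fine when all workers are desktop/server machines running the full TensorFlow runtime.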
My question: is there any way to split the training with data parallelism across heterogeneous machines, including both mobile devices and desktop computers?
I would be really grateful for any tutorial or instructions.