I am using the Asynchronous Hyperband scheduler https://ray.readthedocs.io/en/latest/tune-schedulers.html?highlight=hyperband with 2 GPUs. My machine has 2 GPUs and 12 CPUs. Still, only one trial runs at a time, even though two trials could run simultaneously.
I specify:

import ray
import torch

ray.init(num_gpus=torch.cuda.device_count())

and in the experiment config:

"resources_per_trial": {
    "cpu": 4,
    "gpu": int(args.cuda)}