I have developed a TensorFlow model on Cloud ML Engine with scaleTier: BASIC.
Running its trainer experimentally on a GPU with scaleTier: BASIC_GPU
works fine, but attempting to run it on a TPU with scaleTier: BASIC_TPU
produces this error message:
type.googleapis.com/google.rpc.QuotaFailure
The request for 1 TPU_V2 accelerators exceeds the allowed maximum of 30 K80, 30 P100.
Where does this limitation come from, and can it be lifted, e.g. by enabling another API or increasing my initial budget?
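For reference, this is roughly how the job is being submitted. The job name, module name, package path, region, and runtime version below are placeholders for illustration, not the exact values from my setup; only --scale-tier BASIC_TPU is the relevant part:

```shell
# Hedged sketch of the failing submission; everything except
# --scale-tier BASIC_TPU is a placeholder for my actual values.
gcloud ml-engine jobs submit training my_tpu_job \
  --scale-tier BASIC_TPU \
  --module-name trainer.task \
  --package-path trainer/ \
  --region us-central1 \
  --runtime-version 1.9
```

Submitting the identical command with --scale-tier BASIC_GPU succeeds, so the failure appears specific to the TPU quota rather than to the trainer code.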