
I'm using TPUs through Google Colab and GCP, and I want to dump the XLA IR. However, the XLA documentation on GitHub only describes how to do this when the backend is CPU or GPU.

I have tried running a CPU-targeted program with XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=/content/iir/" TF_XLA_FLAGS=--tf_xla_cpu_global_jit, and that gives me the dumped HLO files. I have also tried capture_tpu_profile, but that only yields the IR for each individual operator on the 'op_profile' page. So, is there a way to dump the XLA IR for the whole program when the backend is TPU?
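For reference, here is roughly the CPU setup that works for me in Colab (the toy function and the dump path are just illustrative):

```python
import os

# XLA flags must be set before TensorFlow initializes XLA.
os.environ["XLA_FLAGS"] = "--xla_dump_hlo_as_text --xla_dump_to=/content/iir/"
os.environ["TF_XLA_FLAGS"] = "--tf_xla_cpu_global_jit"

import tensorflow as tf

# experimental_compile forces XLA compilation of this function
# (renamed to jit_compile in later TF releases).
@tf.function(experimental_compile=True)
def f(x):
    return tf.math.tanh(tf.linalg.matmul(x, x))

f(tf.random.normal([128, 128]))
# The HLO text files for the compiled program appear under /content/iir/.
```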

Thank you!

Jay

1 Answer


Unfortunately, there isn't a way to dump or access the XLA IR on Cloud TPUs at the moment, since XLA_FLAGS would need to be set on the TPU server.

Allen Wang
  • So we can't dump the HLO IR when the target is TPU, even before any optimizations? If that's true, can we use the functions in [libtpu_client.c](https://github.com/tensorflow/tensorflow/blob/r2.2/tensorflow/compiler/xla/python/tpu_driver/client/libtpu_client.c) to run HLO IR that was dumped when the backend was CPU or GPU? – JaySun Aug 07 '20 at 06:20