
I am trying to train a transformer model using a TPU in Google Cloud by following the instructions in the official tutorial. Loading the data worked fine, and after running

t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=translate_ende_wmt32k_packed \
  --train_steps=500000 \
  --eval_steps=3000 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME
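
(The variables point to a Cloud Storage bucket, set up as in the tutorial; the bucket name below is a placeholder rather than my real one, and the TPU name is whichever TPU I am currently using:)

export STORAGE_BUCKET=gs://my-t2t-bucket            # placeholder bucket name
export DATA_DIR=$STORAGE_BUCKET/t2t/data
export OUT_DIR=$STORAGE_BUCKET/t2t/training/transformer_ende
export TPU_NAME=tpuv3-8                             # name of the granted TPU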

the training does start as expected, and the output looks something like this:

I1118 14:48:18.978163 140580835792320 tpu_estimator.py:2307] global_step/sec: 15.2942
INFO:tensorflow:examples/sec: 978.827                                                                                             
I1118 14:48:18.978595 140580835792320 tpu_estimator.py:2308] examples/sec: 978.827                                                
INFO:tensorflow:Enqueue next (100) batch(es) of data to infeed.                                               
I1118 14:48:18.979720 140580835792320 tpu_estimator.py:600] Enqueue next (100) batch(es) of data to infeed.                       
INFO:tensorflow:Dequeue next (100) batch(es) of data from outfeed.                                                                
I1118 14:48:18.979935 140580835792320 tpu_estimator.py:604] Dequeue next (100) batch(es) of data from outfeed.
I1118 14:48:24.292932 140577566803712 transport.py:157] Attempting refresh to obtain initial access_token                         
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-8 in state READY, and health HEALTHY.                                         
W1118 14:48:24.353135 140577566803712 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-8 in state READY, and health HEALTHY.
INFO:tensorflow:loss = 1.8486812, step = 113800 (6.536 sec)                                                                       
I1118 14:48:25.512768 140580835792320 basic_session_run_hooks.py:260] loss = 1.8486812, step = 113800 (6.536 sec)                 
INFO:tensorflow:global_step/sec: 15.2986                                                                 
I1118 14:48:25.514695 140580835792320 tpu_estimator.py:2307] global_step/sec: 15.2986                                             
INFO:tensorflow:examples/sec: 979.11                                                                                              
I1118 14:48:25.515115 140580835792320 tpu_estimator.py:2308] examples/sec: 979.11                                
INFO:tensorflow:Enqueue next (100) batch(es) of data to infeed.                                                                   
I1118 14:48:25.516618 140580835792320 tpu_estimator.py:600] Enqueue next (100) batch(es) of data to infeed.                       
INFO:tensorflow:Dequeue next (100) batch(es) of data from outfeed.                                       
I1118 14:48:25.516829 140580835792320 tpu_estimator.py:604] Dequeue next (100) batch(es) of data from outfeed.                    
INFO:tensorflow:Outfeed finished for iteration (388, 47)                                                                          
I1118 14:48:28.761935 140577575196416 tpu_estimator.py:279] Outfeed finished for iteration (388, 47)       
INFO:tensorflow:loss = 1.5237397, step = 113900 (6.573 sec)                                                                       
I1118 14:48:32.086134 140580835792320 basic_session_run_hooks.py:260] loss = 1.5237397, step = 113900 (6.573 sec)

However, after a non-deterministic number of iterations (sometimes fewer than 25k, sometimes more than 400k, and sometimes the problem does not occur at all), the training suddenly stops. There is no error message, but no further progress is made. When this happens, I get the following output:

I1120 13:40:33.828651 140684764419520 tpu_estimator.py:2307] global_step/sec: 16.3988
INFO:tensorflow:examples/sec: 1049.52
I1120 13:40:33.829339 140684764419520 tpu_estimator.py:2308] examples/sec: 1049.52
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
I1120 13:40:33.830607 140684764419520 tpu_estimator.py:600] Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
I1120 13:40:33.830862 140684764419520 tpu_estimator.py:604] Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Outfeed finished for iteration (7, 0)
I1120 13:40:34.267921 140681504278272 tpu_estimator.py:279] Outfeed finished for iteration (7, 0)
I1120 13:40:39.989195 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:40:40.056418 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:41:10.124164 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:41:10.177670 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:41:40.259634 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:41:40.309398 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:42:10.377460 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health UNKNOWN.
W1120 13:42:10.431982 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health UNKNOWN.
I1120 13:42:40.508342 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:42:40.567739 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:43:10.638391 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:43:10.694900 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:43:40.763782 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:43:40.810777 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:44:10.889873 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:44:10.942733 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:44:41.011034 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:44:41.066553 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.

Note that the reported health was UNKNOWN once, which may or may not be related to this problem.

To resume training, I have to stop the process and run the training command again. It will then load the latest checkpoint and continue training, until it eventually stops again.
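
To avoid babysitting it, I am currently experimenting with a crude watchdog of my own (just a sketch, not part of the tutorial; the 30-minute threshold and the log path are arbitrary choices) that restarts the trainer whenever the reported step stops advancing:

#!/bin/bash
# Crude watchdog sketch: restart t2t-trainer when no progress is reported.
LOG=/tmp/t2t-train.log      # tail -f this file in another pane to watch progress
CHECK_SECS=1800             # 30 minutes without a new step counts as "stuck"

while true; do
  t2t-trainer \
    --model=transformer \
    --hparams_set=transformer_tpu \
    --problem=translate_ende_wmt32k_packed \
    --train_steps=500000 \
    --eval_steps=3000 \
    --data_dir=$DATA_DIR \
    --output_dir=$OUT_DIR \
    --use_tpu=True \
    --cloud_tpu_name=$TPU_NAME > "$LOG" 2>&1 &
  pid=$!
  last_step=""

  # The TPUPollingThread keeps logging every 30 s during a hang, so the log
  # file's mtime is useless; watch the "loss = ..., step = N" lines instead.
  while kill -0 "$pid" 2>/dev/null; do
    sleep "$CHECK_SECS"
    step=$(grep -o 'step = [0-9]*' "$LOG" | tail -n 1)
    if [ -n "$step" ] && [ "$step" = "$last_step" ]; then
      echo "Step stuck at '$step', restarting trainer" >&2
      kill "$pid"
      wait "$pid" 2>/dev/null
    fi
    last_step="$step"
  done
done

(Note that this also restarts the trainer after it reaches train_steps, so it still has to be stopped by hand; it only removes the need to notice the hang myself.)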

I am running the training command from within a tmux session, so this should not be caused by connection issues between my machine and Google Cloud. In fact, I can close all windows completely and connect to the running training session from another PC.

I have seen the question TPU training freezes in the middle of training, but I am using a predefined model, and my bucket is in the same region as the TPU (TPU in us-central1-a, storage bucket in us-central1).
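
For completeness, this is roughly how I checked the two locations (the TPU name, zone and bucket variable are from my setup; yours will differ):

# TPU details, including zone and state
gcloud compute tpus describe $TPU_NAME --zone=us-central1-a

# Bucket region: look for "Location constraint" in the output
gsutil ls -L -b $STORAGE_BUCKET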

Edit: In case this is relevant: I am currently on a free one-month trial that I got by applying to the TensorFlow Research Cloud (TFRC) project. Maybe those cluster nodes are less stable than the paid ones?

Edit 2: Maybe this is related to the GitHub issue TPU dies after 3hrs (e.g. with no 'health' state) (and the follow-up)? Note that the issue has been closed, but the given answer appears to be unrelated to this problem. Also, I have checked the file /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tpu/preempted_hook.py on my cloud VM, and both linked changes are already incorporated.

2 Answers


I had the same issue when training with a TFRC TPU. As the warning suggests, there seems to be a problem with the connection between the TPU and Google Cloud, even when the instructions are followed.

I tried a few things:

  • Remove gcloud config folder

    rm -rf ~/.config/gcloud

  • Update gcloud sdk:

    gcloud components update

  • Give the TPU access to the Cloud Storage bucket via IAM (see the sketch below).
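
For that last point, something along these lines should work (PROJECT_NUMBER and YOUR_BUCKET are placeholders; the exact Cloud TPU service account is shown in the Cloud Console):

    # Grant the Cloud TPU service account read/write access to the bucket
    gsutil iam ch \
      serviceAccount:service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com:roles/storage.objectAdmin \
      gs://YOUR_BUCKET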

The TPU hangs still happen, but less frequently. Hopefully this helps in your case, or you manage to find a universal solution.

  • Thanks, I've tried this. But I am not sure whether it really helped - one experiment just died within 30 minutes of starting it. Hopefully there will be a universal solution soon. – Christopher Nov 20 '19 at 13:55

This was reported as a bug on GitHub (#1, #2) and has since been fixed. If the error still occurs, you should reply to the second GitHub issue. Note that you might have to recreate the TPU; just restarting it may not be enough.
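
Recreating the TPU goes roughly like this; the name, zone, accelerator type and TensorFlow version below are just examples, so use the values you were granted and check the exact flags your gcloud version expects:

gcloud compute tpus delete tpuv3-8 --zone=us-central1-a
gcloud compute tpus create tpuv3-8 \
  --zone=us-central1-a \
  --accelerator-type=v3-8 \
  --version=1.15 \
  --network=default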
