
Getting the following error when trying to execute:

%cd /content/DeepSpeech
!python3 DeepSpeech.py --train_cudnn True --early_stop True --es_epochs 6 --n_hidden 2048 --epochs 20 \
  --export_dir /content/models/ --checkpoint_dir /content/model_checkpoints/ \
  --train_files /content/train.csv --dev_files /content/dev.csv --test_files /content/test.csv \
  --learning_rate 0.0001 --train_batch_size 64 --test_batch_size 32 --dev_batch_size 32 --export_file_name 'ft_model' \
   --augment reverb[p=0.2,delay=50.0~30.0,decay=10.0:2.0~1.0] \
   --augment volume[p=0.2,dbfs=-10:-40] \
   --augment pitch[p=0.2,pitch=1~0.2] \
   --augment tempo[p=0.2,factor=1~0.5] 

tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
  (0) Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 2048, 2048, 1, 798, 64, 2048]
         [[{{node tower_0/cudnn_lstm/CudnnRNNV3}}]]
         [[tower_0/gradients/tower_0/BiasAdd_2_grad/BiasAddGrad/_87]]
  (1) Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 2048, 2048, 1, 798, 64, 2048]
         [[{{node tower_0/cudnn_lstm/CudnnRNNV3}}]]
0 successful operations. 0 derived errors ignored.

Danish Bansal

2 Answers


If I run it as below, with the augmentation flags commented out, it works fine.

%cd /content/DeepSpeech
!python3 DeepSpeech.py --train_cudnn True --early_stop True --es_epochs 6 --n_hidden 2048 --epochs 20 \
  --export_dir /content/models/ --checkpoint_dir /content/model_checkpoints/ \
  --train_files /content/train.csv --dev_files /content/dev.csv --test_files /content/test.csv \
  --learning_rate 0.0001 --train_batch_size 64 --test_batch_size 32 --dev_batch_size 32 --export_file_name 'ft_model' \
  # --augment reverb[p=0.2,delay=50.0~30.0,decay=10.0:2.0~1.0] \
  # --augment volume[p=0.2,dbfs=-10:-40] \
  # --augment pitch[p=0.2,pitch=1~0.2] \
  # --augment tempo[p=0.2,factor=1~0.5]

Basically, the augmentation was doing something that broke our training partway through.

Danish Bansal

Best guess here is that TensorFlow is running out of memory. The batch sizes for train, dev, and test are quite large in both cases, and the augmentation requires additional memory on top of that. Try dropping the batch sizes down and see whether training starts; if it does, gradually increase them.
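As a rough sanity check (a back-of-envelope sketch, not DeepSpeech's actual allocator), the model config in the error message — batch_size=64, max_seq_length=798, num_units=2048 — already implies sizeable activation tensors before augmentation adds its own overhead, assuming float32 elements:

```python
def activation_bytes(batch_size, max_seq_length, num_units, bytes_per_elem=4):
    """Size of one [max_seq_length, batch_size, num_units] float32 tensor
    that the cuDNN LSTM must hold for a forward pass (illustrative only)."""
    return batch_size * max_seq_length * num_units * bytes_per_elem

# Values taken from the CudnnRNNV3 error log above.
mib = activation_bytes(64, 798, 2048) / 2**20
print(f"{mib:.0f} MiB per activation tensor")  # 399 MiB
```

Backprop keeps several tensors of this shape alive at once, so halving `train_batch_size` is the quickest way to cut the footprint roughly in half.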

Kathy Reid