
I am trying to train a YOLOv5 neural network to recognize vehicles. However, when training on Google Colab, it always stalls here:

train: Scanning 'MyDataset/train/labels.cache' for images and labels... 26559 found, 0 missing, 0 empty, 0 corrupted: 100% 26559/26559 [00:00<?, ?it/s]
train: Caching images (8.5GB): 62% 16425/26559 [00:46<00:30, 330.41it/s]
CPU times: user 850 ms, sys: 162 ms, total: 1.01 s
Wall time: 1min 26s

I followed the Roboflow tutorial. When I switched to the smaller dataset provided by Roboflow, training was able to proceed. I'm a Colab Pro+ user, so it shouldn't be a matter of not having enough memory.


1 Answer


I switched to a smaller dataset, and now the cache loads without any problems:

train: Caching images (4.6GB): 100% 8853/8853 [00:18<00:00, 483.20it/s]

Training then proceeded smoothly, so it does seem to be a matter of too much data. YOLOv5's image cache is held in RAM, so when the cache outgrows the runtime's memory the caching step apparently just stalls; oddly, Colab gives no out-of-memory warning.
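For what it's worth, extrapolating from the numbers in the question's log suggests the full cache would not have fit in a standard Colab runtime's RAM. A back-of-the-envelope sketch (the figures are copied from the log; assuming cache size grows roughly linearly with image count):

```python
# Extrapolate the full RAM cache size from the stalled run's progress bar.
cached_gb = 8.5      # RAM used by the cache at the point of the stall
cached_imgs = 16425  # images cached so far
total_imgs = 26559   # images in the training set

estimated_total_gb = cached_gb * total_imgs / cached_imgs
print(f"estimated full cache: {estimated_total_gb:.1f} GB")  # ≈ 13.7 GB
```

If training on the full dataset is needed, omitting the `--cache` flag (so images are read from disk each epoch) or, in recent YOLOv5 versions, passing `--cache disk` should avoid holding the whole set in RAM; whether your notebook uses that flag is an assumption on my part.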
