
Device: RTX 3050 Ti laptop GPU, i7 12th gen CPU with 16 GB RAM

I am using this command to run the code:

yolo task=detect mode=train epochs=10 data=data_custom.yaml model=yolov8l.pt device=0

and I get the same error every time:

torch.cuda.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 20.00 MiB (GPU 0; 3.80 GiB total capacity; 2.44 GiB already allocated; 23.38 MiB free; 2.47 GiB reserved in total by PyTorch) 
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 

How do I get rid of this error and run the code on the GPU itself?

Shayan Daneshvar

2 Answers


You are using the Large pre-trained model (yolov8l.pt), which requires a big GPU and a good amount of RAM. Here are a few things you can try:

  1. Try reducing the batch size. The default is 16; try 8 or lower.
  2. Try a smaller model such as the Medium (m), Small (s), or Nano (n) model, as the RTX 3050 Ti is a low- to mid-end GPU. (Go with S first, then M.)
  3. Try increasing the page file size, as your 16 GB of RAM might not be enough when the dataset is loaded into memory.
  4. Train the model on a more powerful system, such as Google Colab.
  5. Train the model on the CPU.
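The error message itself also suggests a mitigation: when reserved memory is much larger than allocated memory, allocator fragmentation may be the problem, and `PYTORCH_CUDA_ALLOC_CONF` can limit the split size. A minimal sketch (the value `128` is an illustrative guess, not a tuned recommendation, and the variable must be set before the first CUDA allocation):

```python
import os

# Must be set before torch initializes CUDA (i.e. before importing torch
# or launching training). 128 MiB is only an illustrative value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Combining the first two suggestions, the training command might then become, e.g., `yolo task=detect mode=train epochs=10 data=data_custom.yaml model=yolov8s.pt device=0 batch=8`.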

Just use `batch=-1` and the YOLO code will handle the memory allocation itself. Worked for me!

  • Can you elaborate a little? – TheTridentGuy supports Ukraine Jun 05 '23 at 04:41
  • A batch size of `-1` tells the train method to determine the optimal batch size on its own. Usually, YOLOv8 attempts to put as many images into a batch as it can such that ~65% of GPU memory is used. However, it still raises an `OutOfMemoryError` in my case. – 7shoe Aug 21 '23 at 19:14
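For what it's worth, the idea behind this auto-batch behavior can be sketched as "estimate how many images fit in a target fraction of GPU memory". The function below is a hypothetical illustration of that idea, not the library's actual code; `pick_batch`, the per-image cost, and the 60% target are all assumptions:

```python
def pick_batch(total_mib, used_mib, per_image_mib, target=0.60):
    # Budget = targeted fraction of total GPU memory, minus what is
    # already in use; then count how many images fit in that budget.
    budget = total_mib * target - used_mib
    return max(1, int(budget // per_image_mib))

# e.g. a 4 GiB card with ~512 MiB already in use and ~150 MiB per image:
print(pick_batch(4096, 512, 150))  # → 12
```

This also shows why the commenter still hit `OutOfMemoryError`: the estimate is made once up front, so a later spike in per-image memory use can still exceed the card's capacity.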