
I finished the installation and wanted to try training CLIP-Fields (https://github.com/notmahi/clip-fields) directly by running:

python train.py dataset_path=nyu.r3d

I am monitoring both RAM and GPU VRAM. When the code starts, the data begins loading into RAM:

 Loading data: 100%|████████████████████████████████████████████████████████| 757/757 [00:05<00:00, 135.05it/s]
Upscaling depth and conf: 100%|████████████████████████████████████████████| 757/757 [00:04<00:00, 157.72it/s]
Calculating global XYZs: 100%|██████████████████████████████████████████████| 757/757 [00:14<00:00, 51.72it/s] 
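For reference, this is roughly how I'm monitoring the process's RAM usage from inside Python (a minimal stdlib-only sketch, not part of the CLIP-Fields code; the GB figure itself comes from watching the system monitor):

```python
import resource
import sys

def ram_peak_gb() -> float:
    """Peak resident set size of the current process, in GB.

    Linux reports ru_maxrss in KiB, macOS in bytes, so scale accordingly.
    """
    raw = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    scale = 1 if sys.platform == "darwin" else 1024
    return raw * scale / 1e9

print(f"Peak RAM used so far: {ram_peak_gb():.2f} GB")
```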

Loading the data occupies about 30 GB of RAM. Then models such as Detic load onto the GPU, but when execution reaches line 177 of dataloaders/real_dataset.py, the OS kills the process because of a RAM OOM:

# First, setup detic with the combined classes.
self._setup_detic_all_classes(view_data)

Why is the data loaded into RAM rather than onto the GPU? Is there any way to reduce the amount of memory used?
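One generic idea I have considered (a hypothetical sketch, not the CLIP-Fields API: the array names and shapes here are made up for illustration) is shrinking the per-frame arrays before they accumulate, either by downcasting to half precision or by subsampling frames:

```python
import numpy as np

# Stand-in for the dataset: 757 frames of upscaled depth maps.
# Shapes are scaled down here so the demo stays tiny.
frames, h, w = 757, 96, 72
depth64 = np.random.rand(frames, h, w)   # float64 baseline, as loaded

depth16 = depth64.astype(np.float16)     # half precision: 4x less RAM
every_4th = depth64[::4]                 # keep every 4th frame: ~4x fewer

print(depth64.nbytes / depth16.nbytes)   # 4.0
print(every_4th.shape[0])                # 190 frames instead of 757
```

Whether either trick is acceptable presumably depends on how much precision and frame density the downstream XYZ calculation needs.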

talonmies
Pep Bravo

0 Answers