
Using PyTorch 1.0 Preview with fastai v1.0 in Colab.

I often get RuntimeError: DataLoader worker (pid 13) is killed by signal: Bus error. on more memory-intensive tasks (nothing huge).

Looks like a shared memory issue: https://github.com/pytorch/pytorch/issues/5040#issue-294274594

The fix appears to be to increase the shared memory of the Docker container:

https://github.com/pytorch/pytorch/issues/2244#issuecomment-318864552

It looks like the shared memory of the Docker container wasn't set high enough. Setting a higher amount by adding --shm-size 8G to the docker run command seems to be the trick, as mentioned there.
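For context, the DataLoader workers pass tensors between processes through the shared-memory mount at /dev/shm, which is the size that --shm-size controls. A quick sketch to check how much shared memory the current environment (e.g. the Colab container) actually provides, using only the standard library:

```python
import shutil

# /dev/shm is the tmpfs mount that PyTorch DataLoader workers use for
# inter-process tensor transfer; a too-small size triggers the Bus error.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {total / 2**30:.2f} GiB, free: {free / 2**30:.2f} GiB")
```

If the reported total is very small (the old Docker default was 64 MB), that would explain the crash under memory-intensive loading.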

How can I increase the shared memory of the Docker container running in Colab, or otherwise avoid this error?

jeffhale

1 Answer


It's not possible to modify this setting in Colab, but the default was already raised to fix this issue, so you should not need to change the setting further: https://github.com/googlecolab/colabtools/issues/329

Ami F