
When I try to run AdaBoost in Colab for a Kaggle competition (the project is here: https://www.kaggle.com/c/competitive-data-science-predict-future-sales), the session always ends with "Runtime died, automatically restarting". I googled for answers, and one suggestion was: "you can type '!/opt/bin/nvidia-smi' before doing the calculation". The output after doing this is:

Fri Sep 28 00:48:45 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   36C    P8    33W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

So I am sure all of the GPU memory is free, but no matter how I run my code:

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

adb_reg = AdaBoostRegressor(base_estimator=None, n_estimators=20, learning_rate=1.0, loss='linear', random_state=None)
adb_reg.fit(df[predictors], np.ravel(df[response]), sample_weight=None)

I still get "Runtime died...". After "enjoying" this kind of failure, can I conclude that Colab is just a toy and that I need to find another way to work on Kaggle projects? Any suggestions?

Michael
  • Try sampling the data set to see if the problem is indeed crashing OOM. If so, there are many things you can try, e.g., reducing the batch size during training (see the sketch after these comments). – Bob Smith Sep 28 '18 at 15:40
  • Thanks a lot; I really wish someone could give an explicit and detailed answer. This issue is really annoying. – Michael Oct 07 '18 at 00:23
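
A minimal sketch of the sampling idea from the comment above, assuming df, predictors, and response are the same objects as in the question; the 10% sampling fraction is only an illustrative starting point, not a tuned value:

import numpy as np
from sklearn.ensemble import AdaBoostRegressor

# Fit on a small random subset first; if this succeeds while the full fit
# crashes, the "runtime died" restart is very likely an out-of-memory problem.
df_small = df.sample(frac=0.1, random_state=42)

adb_reg = AdaBoostRegressor(n_estimators=20, learning_rate=1.0, loss='linear', random_state=None)
adb_reg.fit(df_small[predictors], np.ravel(df_small[response]))

If the sampled fit works, one can gradually increase the fraction to find the point where memory runs out.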

0 Answers