I'm forced to work with JupyterLab on a Linux server which I do not host myself. The problem is that the Jupyter process consumes a lot of memory; this has come up in several bug reports, like here and here.
Anyhow, as you can anticipate from the introduction, I do not have any sudo rights and therefore cannot restart the lab myself (at least I think this should not be possible for me).
What seems odd to me can be seen in this screenshot taken from htop:
The bash command which started the lab has a lot of subprocesses, which all look like kernels I opened and closed over the entire usage time (the server has been up for a month and I opened and closed a lot of kernels; none were running when the picture was taken).
Since the command line of each of these processes ends in a .json file, I assume these are leftover kernel processes still holding their runtime parameters. All processes on the third level look identical to the one in the screenshot; there is nothing else there.
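To double-check that the server itself agrees that no kernel is alive, I was thinking of querying its REST API, roughly like this (just a sketch; the base URL and token are placeholders that would come from `jupyter server list` on the machine):

```python
import json
import urllib.request

# Placeholders: base URL and token would come from `jupyter server list`.
BASE = "http://localhost:8888"
TOKEN = "<my-token>"

req = urllib.request.Request(
    f"{BASE}/api/kernels",
    headers={"Authorization": f"token {TOKEN}"},
)
with urllib.request.urlopen(req) as resp:
    kernels = json.load(resp)

# An empty list means the server itself considers no kernel alive,
# so the processes visible in htop would be orphans.
print(kernels)
```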
Anyhow, I don't want to resolve the memory-leak bug itself. My question is rather straightforward:
Since no kernel is running: can I just kill all of the processes on the third level and free the memory that way, or might this crash the lab?
Not crashing the lab is essential, since I have no way to restart it.
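If killing them is safe, this is roughly what I have in mind (a sketch only; it assumes `pgrep` is available and that the server's command line matches `jupyter-lab`, which I would verify against htop first):

```python
import os
import signal
import subprocess

# Assumption: the server's command line matches "jupyter-lab"; adjust
# the pattern to whatever htop shows. -o picks the oldest (original)
# matching process, -f matches against the full command line.
server_pid = subprocess.check_output(
    ["pgrep", "-o", "-f", "jupyter-lab"], text=True).strip()

# Direct children of the server: the suspected leftover kernels.
children = subprocess.check_output(
    ["pgrep", "-P", server_pid], text=True).split()

for pid in children:
    # SIGTERM first, so each process gets a chance to shut down
    # cleanly; I would only escalate to SIGKILL if one refuses to die.
    os.kill(int(pid), signal.SIGTERM)
```

I would deliberately avoid SIGKILL on the first pass so the processes can clean up after themselves.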