I am running a Jupyter notebook (3.6) and have 20 slaves enabled on startup using
"env":{"JUPYTERQ_SERVERARGS":"-s 20"}
If I check this in the notebook, it all looks good:
\s
20i
However, when I run a parallel computation, e.g.,
\t:100 {sqrt (200000?x) xexp 1.7} peach 10?1.0
I can see that all slaves use the same CPU.
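One way I check where each thread actually sits, from inside q (assuming a Linux ps with thread support; the psr column is the processor a task last ran on):
system "ps -Lo tid,psr,pcpu -p ",string .z.i  / one row per thread of this q process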
If I run the same command in a q session started from the command line, using the same q binary that I specify in my runkernel.py, it distributes the slaves across all available CPUs.
Does anyone have an idea why the JupyterLab q session would only use one CPU?
EDIT: Thanks Callum and Terry for pointing me to taskset. Initially the affinity mask was 8000000 (i.e. CPU 27). I changed that with
system "taskset -cp 30-40 ",(string .z.i)
"pid 193048's current affinity list: 27"
"pid 193048's new affinity list: 30-40"
and re-ran the process above. All the tasks still run on CPU 28 (the CPU used before, despite the mask saying 27). I also tried setting the affinity for the JupyterLab process itself, but that has no effect either.
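One more thing I plan to try (a sketch, in case it helps others): as far as I understand, taskset -p without -a only changes the affinity of the one thread it is pointed at, so already-running secondary threads may keep their old mask; util-linux taskset has an -a/--all-tasks flag for that:
system "taskset -a -cp 30-40 ",string .z.i  / -a applies the mask to every thread of the pid, not only the main one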