
I am running a Jupyter notebook (3.6) and have 20 slaves enabled on startup using

"env":{"JUPYTERQ_SERVERARGS":"s-20"}

If I check this in the notebook it looks all good

\s
20i

However, when I run a parallel process, e.g.,

\t:100 {sqrt (200000?x) xexp 1.7} peach 10?1.0

I can see that all slaves use the same cpu.
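
As a quick sanity check (a sketch, not part of my original timings): peach should beat plain each on this expression if the threads were actually spread out, whereas with everything pinned to one core the two timings come out about the same:

\t:100 {sqrt (200000?x) xexp 1.7} each 10?1.0
\t:100 {sqrt (200000?x) xexp 1.7} peach 10?1.0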

If I run the same command in a q session started from the command line, using the same q binary that I specify in my runkernel.py, it distributes the slaves across all available cpus.

Does anyone have an idea why the jupyterlab q session would only use 1 cpu?

EDIT: Thanks Callum and Terry for pointing me to taskset. Initially the affinity mask was set to 8000000. I changed that:

system "taskset -cp 30-40 ",(string .z.i)
"pid 193048's current affinity list: 27"
"pid 193048's new affinity list: 30-40"

and re-ran the process above. All the tasks still run on cpu 28 (that was the cpu used before, despite the mask restricting it to 27). I also tried setting the affinity for the jupyter lab process itself, but that has no effect either.
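
In case it helps anyone debugging the same thing, this is a rough way to see the affinity of each individual thread of the q process (Linux only; note the secondary threads only exist after the first peach has run):

/ list per-thread affinities of this q process (Linux only)
tids:system "ls /proc/",(string .z.i),"/task"
{system "taskset -cp ",x} each tids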

chrise
  • Are you using `taskset` when starting up either the kdb or jupyterlab processes? Using `taskset` is generally how you ensure a kdb process works over many/specific cpus – terrylynch Sep 02 '19 at 07:33
  • If you're running on Linux, you can use the following in a Jupyter notebook to check the taskset easily: `q)system "taskset -p ",string .z.i`. If you are running on Windows you can use something like Process Monitor and manually find the jupyter lab process – Callum Biggs Sep 02 '19 at 08:53
  • chrise, did you resolve this issue in the end? – Callum Biggs Sep 18 '19 at 10:09

1 Answer


Try specifying your server args differently; they get parsed out as

SERVERARGS:getenv`JUPYTERQ_SERVERARGS

So the string ends up being passed through as command-line options for the q server process, which means it needs to be in the usual flag form. Try specifying them as follows:

"env":{"JUPYTERQ_SERVERARGS":"-s 20"}
Callum Biggs