On an Nvidia GPU, we can have multiple kernels running concurrently by using streams. How about the Xeon Phi? If I offload two parts of the computation from different host threads, will they run concurrently on the Xeon Phi?
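For example, something along these lines (a rough sketch using the Intel offload pragma; `compute_part_a` and `compute_part_b` are placeholders for my real routines):

```c
#include <omp.h>

/* Routines called inside an offload region must be compiled for the card. */
__attribute__((target(mic))) void compute_part_a(double *a, int n);
__attribute__((target(mic))) void compute_part_b(double *b, int n);

void run_both_parts(double *a, double *b, int n)
{
    #pragma omp parallel sections num_threads(2)
    {
        #pragma omp section
        {
            /* offload issued from host thread 0 */
            #pragma offload target(mic:0) inout(a : length(n))
            compute_part_a(a, n);
        }
        #pragma omp section
        {
            /* offload issued from host thread 1 */
            #pragma offload target(mic:0) inout(b : length(n))
            compute_part_b(b, n);
        }
    }
}
```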
- I don't see anything prohibiting concurrent execution on a Phi. However, I'm not 100% sure. So... +1 – Michael Haidl Nov 08 '13 at 06:41
- @kronos Actually I tried it myself. It looks like you can run things concurrently. – Archeosudoerus Nov 08 '13 at 13:51
1 Answer
Yes, you can have concurrent offload executions on the Xeon Phi, up to 64 by default. See the `--max-connections` parameter of the Coprocessor Offload Infrastructure (COI) daemon running on the Xeon Phi (`/bin/coi_daemon`):

    --max-connections=<int> The maximum number of connections we allow from host
                            processes. If this is exceeded, new connections
                            are temporarily blocked. Defaults to 64.
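For illustration, another way to keep more than one offload in flight is the asynchronous `signal`/`offload_wait` mechanism, this time from a single host thread. A minimal sketch (not tested here; function names are placeholders):

```c
#include <stdio.h>

/* Routines compiled for the coprocessor (placeholders). */
__attribute__((target(mic))) void work_a(double *a, int n);
__attribute__((target(mic))) void work_b(double *b, int n);

void overlapped_offloads(double *a, double *b, int n)
{
    int sig_a, sig_b;   /* tags identifying each in-flight offload */

    /* signal() makes the offload asynchronous: control returns to the
       host as soon as the offload has been queued. */
    #pragma offload target(mic:0) inout(a : length(n)) signal(&sig_a)
    work_a(a, n);

    /* The second offload is queued while the first may still be running. */
    #pragma offload target(mic:0) inout(b : length(n)) signal(&sig_b)
    work_b(b, n);

    /* Block until both offloads have completed. */
    #pragma offload_wait target(mic:0) wait(&sig_a)
    #pragma offload_wait target(mic:0) wait(&sig_b)

    printf("both offloads finished\n");
}
```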

damienfrancois
- Thank you for the answer. I did some experiments yesterday. Do you know how tasks are processed on the Phi? Is task management on the Phi more like a CPU, or more like a GPU with streams? I tried having two tasks running on the Phi, each asking for the maximum number of threads, and both ran slowly. How does this affect the cache? – Archeosudoerus Nov 08 '13 at 15:44
- Task management on the Phi is done by the Linux kernel running on the Xeon Phi (see more info [here](http://software.intel.com/en-us/articles/system-administration-for-the-intel-xeon-phi-coprocessor)), so it is more like a CPU. If you have two tasks trying to use all the cores on the Phi, they will fight for resources, incur context switches, etc. – damienfrancois Nov 08 '13 at 15:52
- Hi @damienfrancois, here is another issue: if I have two host threads offloading to the Phi and each of them asks for 60 OpenMP threads, how many threads will there actually be on the Phi, 60 or 120? Thank you. – Archeosudoerus Nov 12 '13 at 18:41
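A quick way to check this on a given setup (a sketch using the Intel offload pragmas, not an authoritative answer): have each host thread offload a region that requests 60 OpenMP threads and report the team size its parallel region actually received.

```c
#include <stdio.h>
#include <omp.h>

void check_offload_teams(void)
{
    #pragma omp parallel num_threads(2)   /* two host threads */
    {
        int host_id = omp_get_thread_num();
        int team = 0;

        /* Each host thread issues its own offload and asks for 60 threads. */
        #pragma offload target(mic:0) out(team)
        {
            #pragma omp parallel num_threads(60)
            {
                #pragma omp master
                team = omp_get_num_threads();
            }
        }

        printf("host thread %d got a coprocessor team of %d threads\n",
               host_id, team);
    }
}
```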