
Say a multi-threaded app runs on an 8-core Solaris system. Is there a way to list the mapping between each thread and the core # it runs on?

Thanks,

CCNA
  • What research have you done? What web queries? What documentation have your read? – Gray Feb 28 '13 at 15:10
  • I just have to ask: Why do you want this? – Martin James Feb 28 '13 at 15:10
  • Regardless of why, I'm interested in the answer. – wcm Feb 28 '13 at 15:58
  • Unless you have specifically bound particular processes to particular cores (in which case you should already know this information), by the time you actually print and interpret the mapping, it's quite likely the thread has migrated (several times, even, on busy systems)... – twalberg Feb 28 '13 at 16:55
  • The term you should look for is called CPU affinity, that should help you with a websearch. I don't think there is a fixed default mapping though, but this depends on the actual threading system underneath. – Ulrich Eckhardt Feb 28 '13 at 16:55
  • Why? This is for a performance analysis. One app spawns a bunch of threads which are all doing the same thing. The developer claims that on multi-core (like 64-core) hardware this will demonstrate different behavior than on a two-core machine. I doubt it, since I thought most threads are bound to the same core by default. So I need to find a way to prove this... – CCNA Feb 28 '13 at 20:08
  • You might be confusing two concepts - binding and affinity (as mentioned by @doomster). Binding means basically "this thread will only ever run on this one (or set of) CPU", while affinity means "when this thread is ready to run again, try to put it back on the last CPU it ran on, or one nearby (same chip but maybe a different core), to take advantage of cache effects". Binding is usually only done when explicitly asked for, while the kernel may default to trying to maintain affinities. It doesn't help that the two terms are often interchanged... – twalberg Feb 28 '13 at 20:43

1 Answer


First off, you can write C code to inquire after each thread (LWP) in a process. Open /proc/[pid]/lwp/[lwpid]/lwpsinfo and read it into the lwpsinfo_t struct defined in procfs.h:

processorid_t pr_onpro;         /* processor which last ran this lwp */
processorid_t pr_bindpro;       /* processor to which lwp is bound */

are the two members of interest to you.
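
Putting that together, here is a minimal sketch (assuming a Solaris build environment with procfs.h, and a hypothetical pid passed on the command line) that walks /proc/<pid>/lwp, reads each lwpsinfo file, and prints the CPU each LWP last ran on and the CPU it is bound to, if any:

/*
 * Minimal sketch: for every LWP in a process, print the CPU it last ran on
 * (pr_onpro) and the CPU it is bound to (pr_bindpro, -1 if unbound).
 * Assumes Solaris /proc and procfs.h as described above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <dirent.h>
#include <procfs.h>

int main(int argc, char **argv)
{
    /* Hypothetical usage: ./lwpcpus <pid>; defaults to our own pid. */
    pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : getpid();
    char path[256];

    snprintf(path, sizeof(path), "/proc/%d/lwp", (int)pid);
    DIR *dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }

    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;                               /* skip "." and ".." */

        snprintf(path, sizeof(path), "/proc/%d/lwp/%s/lwpsinfo",
                 (int)pid, de->d_name);
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;                               /* LWP may have exited */

        lwpsinfo_t info;
        if (read(fd, &info, sizeof(info)) == (ssize_t)sizeof(info))
            printf("lwp %d: last ran on cpu %d, bound to %d\n",
                   (int)info.pr_lwpid,
                   (int)info.pr_onpro,
                   (int)info.pr_bindpro);           /* -1 means not bound */
        close(fd);
    }
    closedir(dir);
    return 0;
}

Run it against the multi-threaded process's pid; running it repeatedly also shows whether threads migrate between CPUs over time, which gets at the question raised in the comments.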

Next, before you waste a lot of time (assuming zones are in use):

Run prctl -i zone {ZONENAME} as root, in the global zone only. You get output like this:

NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.max-swap
        system          16.0EB    max   deny                                 -
zone.max-locked-memory
        system          16.0EB    max   deny                                 -
zone.max-shm-memory
        system          16.0EB    max   deny                                 -
zone.max-shm-ids
        system          16.8M     max   deny                                 -
zone.max-sem-ids
        system          16.8M     max   deny                                 -
zone.max-msg-ids
        system          16.8M     max   deny                                 -
zone.max-lwps
        system          2.15G     max   deny                                 -
zone.cpu-cap
        privileged      1.20K       -   deny                                 -
        system          4.29G     inf   deny                                 -
zone.cpu-shares
        privileged          1       -   none                                 -
        system          65.5K     max   none                                 -

A zone.cpu-cap value of 1.20K means 1200, and the unit is percent, so 1200 means 12 CPUs. If I were your admin, there is no way a DEV or TEST zone would get 64 cores. So check this first.

Plus, it seems your assumptions are wrong. Unless binding or affinity controls (processor sets, etc.) are in effect, the system assigns CPUs to threads using the currently enabled scheduling class (FSS, etc.). This means any thread can go to any available CPU at any time, depending on the scheduler and load.
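
If you want to confirm that no explicit binding is in place, a small sketch using processor_bind(2) with PBIND_QUERY can check it; this assumes Solaris headers and queries only the calling LWP, so it does not change any binding:

/*
 * Sketch: query the calling LWP's processor binding with processor_bind(2).
 * PBIND_QUERY leaves the binding unchanged; the current binding is returned
 * in obind (PBIND_NONE if the LWP is not bound to any CPU).
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/processor.h>
#include <sys/procset.h>

int main(void)
{
    processorid_t obind;

    if (processor_bind(P_LWPID, P_MYID, PBIND_QUERY, &obind) != 0) {
        perror("processor_bind");
        return 1;
    }

    if (obind == PBIND_NONE)
        printf("calling lwp is not bound; the scheduler may place it on any cpu\n");
    else
        printf("calling lwp is bound to cpu %d\n", (int)obind);

    return 0;
}

Unbound is the default, which is exactly why the per-LWP pr_onpro value above can change from one look to the next.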

jim mcnamara