
I understand that a process (the parent) can be pinned to a core using sched_setaffinity, and that a forked child inherits the affinity mask, so it is pinned to the same core. However, I don't want to keep them pinned to that one core forever. Ideally, I want them to stay together on the same CPU, i.e., if the parent is migrated by the OS scheduler, the child should follow and be migrated to the same CPU as the parent.

One possible way is to have a shared variable in which the parent periodically publishes its current CPU. The child can then poll this variable and call sched_setaffinity to migrate to the parent's CPU. However, this feels a bit hacky and may involve windows where the two are running on separate CPUs. Are there better ways to achieve this?

sr01853
ngupta
    I'd think this'd be implementation defined, differing from OS to OS. Please add more information. – autistic Jan 23 '13 at 03:58
  • I don't think you can do that on Linux. But what do you mean by "running on the same core" here? Isn't "running on the same core" observationally equivalent to "never running at the same time"? – tmyklebu Jan 23 '13 at 04:30
  • tmyklebu: Firstly, yes, I'm targeting Linux (and don't care about portability). Now, the application is such that the parent sends a buffer to the child, which does foo(buffer). So they never really "run at the same time"; however, if they are running on different cores, the overhead increases significantly (probably because L2/L3 is not shared across cores). Thus, I want them to always be on the same CPU. – ngupta Jan 23 '13 at 05:49
  • Will this help? http://stackoverflow.com/questions/9386229/how-can-i-ensure-that-a-process-runs-in-a-specific-physical-cpu-core-and-thread?lq=1 – NeonGlow Jan 23 '13 at 06:58
  • The by far simplest and most effective way of doing this is using a single process. Can you restructure your code to do that? – salezica Mar 29 '13 at 14:07
  • @ngupta The overhead really increases significantly? Did you verify that? Also, why can't you let both jobs run in the same process if you care so much about performance? – thejh Mar 29 '13 at 14:07
  • @thejh The whole project is to separate components (shared libraries) into separate processes to achieve better resilience against crashes and vulnerabilities (if libjpeg crashes due to, say, a buffer overflow, the main application does not crash and is not compromised). Now that the separate-address-space model works, I'm working on the performance part. – ngupta Apr 19 '13 at 22:13

2 Answers


Would it be possible to run the child in a thread rather than in its own process?

JackCColeman
  • The whole project is to run libraries as separate processes (in the name of security). Anyway, I'm no longer working on that project, so I'll be closing this question. Thank you all for the suggestions and help. – ngupta Aug 03 '13 at 02:30

Would gang scheduling help? The parent and child would then be co-scheduled, running at the same time on different cores.

lsk
  • I wanted to keep parent and child on the same core since they pass buffers frequently and if they are running on different cores, the overhead increases significantly, compared to the case of them running together on the same core. Thus, even if they are Gang scheduled, this copying overhead would still be an issue. – ngupta Apr 19 '13 at 22:09