Do nothing. Really, do nothing.
If your goal is to spread load across your CPUs as efficiently as possible, the proper thing to do is leave it to the operating system, which will move processes onto the relevant CPUs on demand and as necessary.
- If all the processes demand CPU at the same time, the operating system will migrate them so that they run on different processors anyway.
- If one of the processes spends most of its time idle, it's probably better to leave it sharing a CPU with other processes.
When it comes to performance optimization, limiting CPU resources or usage is never a good idea -- you'll only make performance worse. The only time you want to restrict CPU resources, or which CPUs can be used, is when you are deliberately attempting to cripple the process.
Times where you might want to cripple a process include:
- You are a hosting provider and are offering a minimum/maximum band of resources available to a given consumer.
- The process is very badly written or misbehaves, and will improperly consume all resources on the system, starving other processes. Capping it 'saves' the system from having CPU unnecessarily eaten that could be used for other things, at the cost of the offending process's throughput. You'd generally want to actually fix the program in this case as the proper fix.
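If you genuinely are in one of those situations, taskset (from util-linux) is the simplest tool for deliberately restricting which CPUs a process may run on. A minimal sketch -- the sh child here is just a stand-in for whatever command you want to confine:

```shell
# Deliberately pin a command to CPU 0 only -- this is the "crippling"
# case, not an optimization. The child prints its own affinity list
# so you can see the restriction took effect.
taskset -c 0 sh -c 'taskset -cp $$'
```

For an already-running process, `taskset -cp <cpulist> <pid>` changes its affinity in place.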
So - do nothing and let the operating system sort it out. After your instances have been running for a while (and if they really are CPU heavy) you can run the command

ps -Lo psr,pid,tid $(pgrep <processname>)

and you'll see that the threads have been divvied up across the CPUs correctly.
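A quick way to see that spread at a glance is to count threads per CPU. A small sketch -- I'm using the current shell's own PID ($$) here so it runs anywhere, but you'd substitute $(pgrep <processname>) as above:

```shell
# For each CPU number (PSR), count how many of the process's threads
# are currently assigned to it. "psr=" suppresses the header line.
ps -Lo psr= -p $$ | sort -n | uniq -c
```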
If you want to determine whether each process is getting its fair share, and how much CPU each program really uses, do the following (replacing the process name with your own) and you'll get results like these:
$ ps -Lo psr,pid,tid,etime,cputime,comm $(pgrep firefox)
PSR PID TID ELAPSED TIME COMMAND
2 3400 3400 1-07:16:10 01:22:29 firefox
2 3400 3425 1-07:16:10 00:00:00 gdbus
2 3400 3426 1-07:16:09 00:00:00 Gecko_IOThread
3 3400 3427 1-07:16:09 00:00:00 Link Monitor
1 3400 3428 1-07:16:09 00:02:50 Socket Thread
1 3400 3429 1-07:16:09 00:00:00 firefox
0 3400 3430 1-07:16:09 00:00:25 JS Helper
3 3400 3431 1-07:16:09 00:00:26 JS Helper
3 3400 3432 1-07:16:09 00:00:25 JS Helper
1 3400 3433 1-07:16:09 00:00:25 JS Helper
1 3400 3434 1-07:16:09 00:00:26 JS Helper
3 3400 3435 1-07:16:09 00:00:25 JS Helper
0 3400 3436 1-07:16:09 00:00:25 JS Helper
0 3400 3437 1-07:16:09 00:00:26 JS Helper
2 3400 3438 1-07:16:09 00:00:02 JS Watchdog
2 3400 3439 1-07:16:09 00:00:00 Hang Monitor
1 3400 3440 1-07:16:09 00:00:00 BgHangManager
3 3400 3441 1-07:16:09 00:00:32 Cache2 I/O
0 3400 3442 1-07:16:09 00:02:41 Timer
3 3400 3444 1-07:16:09 00:00:00 GMPThread
2 3400 3447 1-07:16:09 00:07:24 Compositor
0 3400 3448 1-07:16:09 00:01:08 ImageBridgeChil
3 3400 3449 1-07:16:09 00:00:31 ImgDecoder #1
1 3400 3450 1-07:16:09 00:00:32 ImgDecoder #2
3 3400 3451 1-07:16:09 00:00:31 ImgDecoder #3
2 3400 3452 1-07:16:09 00:00:00 ImageIO
2 3400 3453 1-07:16:09 00:04:07 SoftwareVsyncTh
0 3400 3454 1-07:16:08 00:00:00 firefox
2 3400 3455 1-07:16:08 00:00:00 Cert Verify
2 3400 3456 1-07:16:08 00:00:00 IPDL Background
0 3400 3457 1-07:16:08 00:00:37 DOM Worker
2 3400 3458 1-07:16:08 00:00:03 HTML5 Parser
2 3400 3462 1-07:16:07 00:00:01 mozStorage #1
1 3400 3463 1-07:16:07 00:00:00 Proxy R~olution
1 3400 3464 1-07:16:07 00:00:49 URL Classifier
2 3400 3466 1-07:16:07 00:00:02 mozStorage #2
0 3400 3467 1-07:16:07 00:00:00 gmain
3 3400 3468 1-07:16:07 00:00:00 Cache I/O
3 3400 3471 1-07:16:07 00:00:00 mozStorage #3
2 3400 3477 1-07:16:07 00:00:35 DOM Worker
2 3400 3479 1-07:16:07 00:00:00 mozStorage #4
0 3400 3482 1-07:16:07 00:00:00 localStorage DB
2 3400 3483 1-07:16:07 00:00:03 mozStorage #5
1 3400 3519 1-07:15:57 00:00:00 mozStorage #6
2 3400 3537 1-07:14:09 00:00:31 DOM Worker
0 3400 3562 1-07:08:35 00:00:00 mozStorage #7
0 3400 3587 1-06:59:39 00:00:00 threaded-ml
2 3400 3597 1-06:49:40 00:00:00 mozStorage #8
2 3400 7594 1-01:36:55 00:00:34 threaded-ml
3 3400 11679 10:48:07 00:00:00 firefox
2 3400 11684 10:48:07 00:00:00 typefind:sink
2 3400 11687 10:48:07 00:00:00 typefind:sink
1 3400 11689 10:48:07 00:00:00 typefind:sink
0 3400 11690 10:48:07 00:00:00 mpegaudioparse0
1 3400 11691 10:48:07 00:00:00 mpegaudioparse1
2 3400 11692 10:48:07 00:00:00 mpegaudioparse2
0 3400 11693 10:48:07 00:00:00 aqueue:src
1 3400 11694 10:48:07 00:00:00 aqueue:src
1 3400 11695 10:48:07 00:00:00 aqueue:src
2 3400 22770 05:38:46 00:00:00 firefox
3 3400 29803 10:17 00:00:00 DNS Res~er #226
3 3400 30018 01:28 00:00:00 DNS Res~er #228
In this example I've used firefox on my machine, but you can change the process name to suit your needs.
Here, I'm requesting every thread that lives in the process. The columns mean the following:
- PSR is the processor number assigned to that task.
- PID is the process id.
- TID is the thread id. (The main process's TID equals its PID.)
- ELAPSED is the total time the process has existed -- basically, how long ago it was started.
- TIME is the total amount of time the process has actually run on a CPU.
- COMMAND is the command name as declared by the process. Here you can see each thread is given a particular name, presumably describing its purpose.
To determine a process's utilization over its lifetime as a percentage, you can perform the following calculation (here I'm using the main firefox thread, whose ELAPSED of 1-07:16:10 is 112570 seconds and whose TIME of 01:22:29 is 4949 seconds):

TIME / ELAPSED * 100 = UTIL
4949 / 112570 * 100 = 4.40
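Doing that time-format conversion by hand is tedious, so here's a small helper sketch that turns ps's [[dd-]hh:]mm:ss fields into plain seconds (the values below are the firefox figures from the output above):

```shell
# Convert a ps time field ([[dd-]hh:]mm:ss) to seconds.
to_seconds() {
  awk -v t="$1" 'BEGIN {
    n = split(t, p, "[-:]")
    s = 0
    for (i = 1; i <= n; i++)
      # the multiplier is 24 only when crossing the days-hours boundary
      s = s * (n == 4 && i == 2 ? 24 : 60) + p[i]
    print s
  }'
}

elapsed=$(to_seconds 1-07:16:10)   # 112570
cputime=$(to_seconds 01:22:29)     # 4949
awk -v t="$cputime" -v e="$elapsed" 'BEGIN { printf "%.2f\n", t / e * 100 }'
# prints 4.40
```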
Note: The actual process ID (the main starting thread, whose TID == PID) acts as a 'container' for the cumulative CPU time of all its threads, existing or no longer existing, so it gives you a reasonably accurate depiction of the process's entire usage.
To explain: if a process's CPU time equals its lifetime, it means that for every moment the process has existed, it has demanded and received a CPU to run on. That would equal 100% CPU utilization.
I'm almost certain in reality you're going to find that your processes are going to be using hardly any CPU.
So - to reiterate: to perform as efficiently as possible, do nothing. Your kernel knows how best to allocate CPU resources to make the most of your system, and anything you could possibly add is, in most cases, reducing your overall effectiveness.
Unless it's your plan to actually cripple the processes in some way (and there are circumstances where you may genuinely intend to do that), you don't want to use taskset, control groups, or LXC/Docker to get the best performance possible.