
Maybe I'm overlooking something, but I can't figure out how to get current_process.cpu_percent(interval=0.1) for all processes at once without iterating over them. As it stands, the iteration takes process_count * interval seconds to finish. Is there a way around this?

So far I'm doing:

#!/usr/bin/env python2
import psutil

cpu_count = psutil.cpu_count()

processes_info = []
for proc in psutil.process_iter():
    try:
        pinfo = proc.as_dict(attrs=['pid', 'username', 'memory_info', 'cpu_percent', 'name'])

        current_process = psutil.Process(pid=pinfo['pid'])

        pinfo["cpu_info"] = current_process.cpu_percent(interval=0.1)
        processes_info.append(pinfo)

    except psutil.NoSuchProcess:
        pass

print processes_info

As far as I understand it, I cannot simply rely on cpu_percent in the attrs list, since the help states:

When interval is 0.0 or None compares system CPU times elapsed since last call or module import, returning immediately. That means the first time this is called it will return a meaningless 0.0 value which you are supposed to ignore.

curiousaboutpi

2 Answers


In order to calculate CPU% you necessarily have to wait, but you don't have to wait 0.1 secs for each process/iteration. Instead, iterate over all processes and call cpu_percent() with interval=0, ignoring the return value; then wait 0.1 secs (or more). The second time you iterate over all processes, cpu_percent(interval=0) will return a meaningful value.
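
A minimal sketch of that two-pass idea, reusing the processes_info / cpu_info names from the question; the 0.1 s sleep is just an example interval:

#!/usr/bin/env python2
import time
import psutil

# First pass: prime the per-process CPU counters. With interval=0 the call
# returns immediately, and the first value is meaningless, so ignore it.
procs = list(psutil.process_iter())
for proc in procs:
    try:
        proc.cpu_percent(interval=0)
    except psutil.NoSuchProcess:
        pass

# Wait once for all processes instead of once per process.
time.sleep(0.1)

# Second pass: cpu_percent(interval=0) now reflects usage over the
# ~0.1 s that elapsed since the first pass.
processes_info = []
for proc in procs:
    try:
        pinfo = proc.as_dict(attrs=['pid', 'username', 'memory_info', 'name'])
        pinfo["cpu_info"] = proc.cpu_percent(interval=0)
        processes_info.append(pinfo)
    except psutil.NoSuchProcess:
        pass

print processes_info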

Giampaolo Rodolà
  • It seems that this solution works like a parallel measurement: it reduces the time from `process_count * interval` to `interval`. – Daewon Lee Aug 28 '16 at 13:47

This amounts to measuring the CPU time of each process, so you cannot avoid the process_count * interval seconds in a single Python interpreter process. Maybe you could reduce the wall-clock time by means of multiprocessing, as in the sketch after the link below.

https://docs.python.org/3/library/multiprocessing.html
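
A rough sketch of that idea, with a hypothetical measure() helper and an arbitrary pool size; it parallelizes the per-process waits rather than eliminating them:

#!/usr/bin/env python2
import multiprocessing
import psutil

def measure(pid):
    # Each worker blocks for its own 0.1 s interval, but N workers measure
    # N processes at once, so the total time drops to roughly
    # process_count * interval / N.
    try:
        return pid, psutil.Process(pid).cpu_percent(interval=0.1)
    except psutil.NoSuchProcess:
        return pid, None

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=8)  # arbitrary worker count
    results = pool.map(measure, psutil.pids())
    pool.close()
    pool.join()
    print dict(results)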

Daewon Lee