I am building a utility which retrieves information for all the running processes on the OS (CentOS 7) using Python 3.6.5.
I created the following function for that purpose, using psutil:
def get_processes(self):
    fqdn = self.get_FQDN()
    process_infos = list()
    for proc in psutil.process_iter():
        proc_info = dict()
        with proc.oneshot():
            proc_info["pid"] = proc.pid
            proc_info["ppid"] = proc.ppid()
            proc_info["name"] = proc.name()
            proc_info["exe"] = proc.exe()  # Requires root access for '/proc/#/exe'
            proc_info["computer"] = fqdn
            proc_info["cpu_percent"] = proc.cpu_percent()
            mem_info = proc.memory_info()
            proc_info["mem_rss"] = mem_info.rss
            proc_info["num_threads"] = proc.num_threads()
            proc_info["nice_priority"] = proc.nice()
        process_infos.append(proc_info)
    return process_infos
I call this function once per second, and after adding it I noticed that my application's CPU consumption rose from ~1% to ~10%.
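The question does not show the calling code, but the one-second iteration presumably looks something like this sketch (get_processes_stub and the poll helper are hypothetical stand-ins, not the real application code):

```python
import time

def get_processes_stub():
    # Hypothetical stand-in for the real get_processes() shown above.
    return []

def poll(interval=1.0, iterations=3, collect=get_processes_stub):
    # Drift-free polling loop: schedule ticks against time.monotonic()
    # so the collection cost does not accumulate into the period.
    snapshots = []
    next_tick = time.monotonic()
    for _ in range(iterations):
        snapshots.append(collect())
        next_tick += interval
        time.sleep(max(0.0, next_tick - time.monotonic()))
    return snapshots

snaps = poll(interval=0.01, iterations=3)
```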
The profiler indicated that most of my CPU time is spent in psutil's _parse_stat_file function, which parses the content of the /proc/&lt;pid&gt;/stat file.
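For reference, a hotspot like this can be located with the standard-library cProfile module; here is a minimal sketch using a stand-in workload (parse_stat and collect below are hypothetical helpers, not psutil internals):

```python
import cProfile
import io
import pstats

def parse_stat(text):
    # Stand-in workload: split a /proc-style stat line.
    return text.rpartition(")")[2].split()

def collect():
    line = "42 (python) S 1 42 42 0 -1 4194304 100 0 0 0 10 5 0 0 20 0 3 0"
    for _ in range(10_000):
        parse_stat(line)

profiler = cProfile.Profile()
profiler.enable()
collect()
profiler.disable()

# Print the top entries by cumulative time; the hottest function
# shows up near the top, the way _parse_stat_file did in the question.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```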
According to the psutil documentation, it is recommended to use the oneshot() context manager for more efficient collection, but as you can see I already use it.
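For context on what _parse_stat_file has to do: /proc/&lt;pid&gt;/stat is a single line of space-separated fields (layout documented in proc(5)), with the process name in parentheses, and that name may itself contain spaces and parentheses. A minimal pure-Python parser of that format might look like the sketch below; this illustrates the work involved, it is not psutil's actual code:

```python
def parse_stat_line(line):
    # The comm field can contain spaces and ')' characters, so anchor on
    # the first '(' and the *last* ')' rather than naively splitting.
    lpar = line.index("(")
    rpar = line.rindex(")")
    pid = int(line[:lpar].strip())
    name = line[lpar + 1:rpar]
    rest = line[rpar + 1:].split()
    # Field numbers below follow proc(5); rest[0] is field 3.
    return {
        "pid": pid,
        "name": name,
        "state": rest[0],                # field 3
        "ppid": int(rest[1]),            # field 4
        "nice": int(rest[16]),           # field 19
        "num_threads": int(rest[17]),    # field 20
        "mem_rss_pages": int(rest[21]),  # field 24, resident set size in pages
    }

# A made-up stat line with an awkward comm value:
sample = ("42 (a (weird) name) S 1 42 42 0 -1 4194304 100 0 0 0 "
          "10 5 0 0 20 0 3 0 12345 1000000 250")
info = parse_stat_line(sample)
```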
Is there something I am doing wrong here? Or am I stuck with psutil's performance? If so, do you know of another utility that might solve my problem more efficiently?