
I am running an executable using the subprocess.Popen API.

I have also set a timeout, since the executable could hang or take too long in the `communicate` call. The problem is that on a heavily loaded server the run often times out. I think this is because the process does not get enough CPU time, as I run multiple processes in parallel.

I could increase the timeout, but that is a slippery slope, as the machine might become even more loaded (it is used as a Jenkins server).

```python
process = subprocess.Popen(cmd, stdout=log, stderr=log, ...)
process.communicate(timeout=timeout)
```

Is there a way I can refactor this to instead measure the CPU time given to the process, and time out based on that?

I've seen questions suggesting `timeit.default_timer()`; however, I am not sure that will work for me, since it measures wall-clock time rather than CPU time.

Henrik
    in that case don't use communicate, just wait for the process to complete with a poll loop, and in that loop you can monitor the CPU using `psutil` module and decide according to the CPU taken – Jean-François Fabre Sep 08 '20 at 08:50
  • How much load do you realistically expect on the machine? Is your machine so overloaded that runtime is significantly affected, compared to the safety margins of timeouts? Say *all* programs are slower by a factor of 3 or more due to load; then a single program taking too long appears to be the least of your problems. Your machine is massively undersized in this case. – MisterMiyagi Sep 08 '20 at 08:53
  • Does this answer your question? [Python - get process names,CPU,Mem Usage and Peak Mem Usage in windows](https://stackoverflow.com/questions/16326529/python-get-process-names-cpu-mem-usage-and-peak-mem-usage-in-windows) – x00 Sep 10 '20 at 07:40
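The poll-loop idea from the comments is one option; another, stdlib-only variant on POSIX systems is to have the kernel itself enforce a CPU-time budget with `resource.setrlimit(RLIMIT_CPU)` in the child. A minimal sketch of the latter, assuming Linux/macOS; `run_with_cpu_limit` and `cpu_seconds` are names invented here for illustration:

```python
# Sketch: enforce a CPU-time (not wall-clock) budget on the child process.
# POSIX-only; run_with_cpu_limit and cpu_seconds are illustrative names.
import resource
import subprocess

def run_with_cpu_limit(cmd, log, cpu_seconds):
    def set_cpu_limit():
        # Runs in the child just before exec: the kernel sends SIGXCPU
        # after cpu_seconds of CPU time, and SIGKILL a few seconds later.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds + 5))

    process = subprocess.Popen(cmd, stdout=log, stderr=log,
                               preexec_fn=set_cpu_limit)
    process.communicate()  # no wall-clock timeout needed
    return process.returncode
```

With this, a runaway busy loop is killed after `cpu_seconds` of actual CPU time no matter how loaded the machine is, while a process that is merely starved of CPU never hits the limit. A child killed by the limit has a negative return code (the signal number).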
