
I have obtained the following results from executing a script:

real 0m1.027s
user 0m1.752s

I understand that:

  • Real time is the wall-clock time. It is the time I would have obtained if I had measured with a stopwatch from the start to the end of the execution.
  • User time is the time the CPU spent executing exclusively the code of the script (this time does not include kernel system calls, for example).

How come it is possible to have user time > real time?


2 Answers


That is possible with a multithreaded application if the threads run in parallel. If two threads are running all the time, then the process consumes two CPU seconds for each second of wall (real) time.
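For illustration, a minimal sketch of this effect (assuming a POSIX shell and a machine with at least two cores; the loop bound is arbitrary): two CPU-bound loops run in background subshells, so CPU time accumulates on two cores at once and user ends up roughly twice real.

    # Run two busy loops in parallel, then wait for both to finish.
    # Each loop burns CPU concurrently, so `user` (summed across all
    # children) should come out near 2x `real` on a multi-core machine.
    time sh -c '
      for n in 1 2; do
        (i=0; while [ "$i" -lt 1000000 ]; do i=$((i+1)); done) &
      done
      wait
    '

(Strictly speaking these are two processes rather than two threads, but time accounts for the CPU time of the command and all the children it waits for in the same way.)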

Hauke Laging
  • I like the clarification of "consumes two CPU seconds in each second wall time". When two or more threads run in parallel, I was not sure whether the seconds from each CPU should be added together to obtain the total CPU time. Thanks! – franncapurro Jul 19 '20 at 19:01

Yes. The user time is the sum of the CPU time spent executing each thread of your process. If multiple threads execute simultaneously, the user time can be larger than the wall-clock (real) time.
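As a quick cross-check with a genuinely multithreaded workload (a sketch; the xz tool and the input path are assumptions, and xz only splits work across threads when the input is large enough to form multiple compression blocks):

    # -T2 asks xz for two worker threads; with both compressing at once,
    # user time exceeds real time.
    time xz -T2 -c /path/to/large/file > /dev/null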

Steven