
I was reading APUE (3rd edition), 8.16 "Process Scheduling", where an example is written to verify that changing the nice value of a process affects its priority. I rewrote the code as shown below:

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

long long count;                /* incremented in a busy loop */
struct timeval end;             /* deadline: start time + 10 s */
static void check_time(const char* str);

int main(int argc, char* argv[])
{
    pid_t pid;
    char* s;
    int nzero, ret;
    int adj = 0;

    setbuf(stdout, NULL);
#if defined(NZERO)
    nzero = NZERO;
#elif defined(_SC_NZERO)
    nzero = sysconf(_SC_NZERO);
#else
#error NZERO undefined
#endif
    printf("NZERO = %d\n", nzero);
    if (argc == 2)
        adj = strtol(argv[1], NULL, 10);
    gettimeofday(&end, NULL);
    end.tv_sec += 10;
    if ((pid = fork()) < 0) {
        perror("fork error");
        return -1;
    } else if (pid == 0) {      /* child: apply the nice adjustment */
        s = "child";
        printf("child nice:%d, adjusted by %d\n", nice(0) + nzero, adj);
        errno = 0;              /* nice() may legitimately return -1 */
        if ((ret = nice(adj)) == -1 && errno != 0) {
            perror("nice error");
            return -1;
        }
        printf("child now nice:%d\n", ret + nzero);
    } else {                    /* parent: keep the default nice value */
        s = "parent";
        printf("parent nice:%d\n", nice(0) + nzero);
    }
    while (1) {
        if (++count == 0) {
            printf("count overflow\n");
            return -1;
        }
        check_time(s);
    }
    return 0;                   /* not reached */
}

static void check_time(const char* str)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    /* exit once the deadline has passed: compare seconds first,
       and microseconds only when the seconds are equal */
    if (tv.tv_sec > end.tv_sec ||
        (tv.tv_sec == end.tv_sec && tv.tv_usec >= end.tv_usec)) {
        printf("%s count:%lld\n", str, count);
        exit(0);
    }
}

The output of the example is shown below:
NZERO = 20
parent nice:20
child nice:20, adjusted by 0
child now nice:20
parent count:601089419
child count:603271014
It looks like the adjustment had no effect on the child process. Why? And how can I make the result come out the way I expect?
(my platform is: Linux liucong-dell 4.4.0-93-generic #116~14.04.1-Ubuntu SMP Mon Aug 14 16:07:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)

cong
  • Anybody know the answer? – cong Sep 07 '17 at 02:33
  • @Sam Protsenko, do you know the answer? Or do you know who might? – cong Sep 07 '17 at 12:04
  • 1. Please read [man 2 nice](https://linux.die.net/man/2/nice). 2. Please provide a [minimal example](http://sscce.org/) showing the problem (it should reproduce the problem, but be much smaller than the one you provided). Frankly, I'm too lazy to debug your code (most likely the problem was introduced there), but if you provide a minimal working example, I'll look into it. – Sam Protsenko Sep 07 '17 at 12:51
  • @SamProtsenko the example provided is pretty much minimal and [reproduces the problem](https://ideone.com/yI0SIC). If you think it can be substantially reduced, point out parts that can be removed. – n. m. could be an AI Sep 07 '17 at 13:05
  • Please don't post pictures of text, post the text itself. – n. m. could be an AI Sep 07 '17 at 13:08
  • This question is a possible duplicate. See these topics: [1](https://stackoverflow.com/questions/10342470/process-niceness-priority-setting-has-no-effect-on-linux), [2](https://serverfault.com/questions/405092/nice-level-not-working-on-linux), [3](https://unix.stackexchange.com/questions/339689/how-to-tell-whether-the-nice-command-is-working) – Sam Protsenko Sep 07 '17 at 13:25
  • @n.m., thanks for your advice, I have made a change. – cong Sep 07 '17 at 13:27

1 Answer


Your multi-core CPU can happily run both the parent and the child simultaneously. As long as both can run, their relative priorities don't matter.

In order to see the effect of nice, you have to load your machine so that there is always a process ready and waiting to run. The easiest way is to make your test multithreaded. Spawn (after the fork) a few worker threads in both the parent and the child, make them all increment the counter (make it either atomic or thread-local), and see what happens.
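A minimal sketch of that suggestion, assuming pthreads and a C11 compiler (e.g. gcc -std=c11 -pthread); the thread count, the shared _Atomic counter, and the hard-coded nice(10) adjustment are illustrative choices, not code from the question:

/*
 * Sketch: after fork(), each process spawns a few worker threads that
 * busy-increment a shared C11 atomic counter until the deadline.
 * NTHREADS and the nice(10) adjustment are illustrative assumptions.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define NTHREADS 4                  /* enough workers to keep all cores busy */

static atomic_llong count;          /* shared; atomic increments avoid races */
static struct timeval deadline;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        atomic_fetch_add(&count, 1);
        struct timeval tv;
        gettimeofday(&tv, NULL);
        if (tv.tv_sec > deadline.tv_sec ||
            (tv.tv_sec == deadline.tv_sec && tv.tv_usec >= deadline.tv_usec))
            return NULL;
    }
}

int main(void)
{
    setbuf(stdout, NULL);
    gettimeofday(&deadline, NULL);
    deadline.tv_sec += 10;          /* run for roughly ten seconds */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    const char *who = (pid == 0) ? "child" : "parent";
    if (pid == 0)
        nice(10);                   /* lower the child's priority */

    /* spawn workers only after fork(): threads do not survive a fork */
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("%s count: %lld\n", who, (long long)atomic_load(&count));
    return 0;
}

With 2 * NTHREADS runnable threads competing for the cores, there is always something waiting to run, so the niced child should end up with a visibly smaller count than the parent.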

n. m. could be an AI
  • I have modified my code to spawn a few worker threads in both the parent and the child, and more strange things happened. I have posted an answer; could you please help me check it? – cong Sep 08 '17 at 03:27
  • @cong The mutex effectively serializes access to the counter, and mutex overhead is large, so having threads doesn't make much sense and introduces a lot of noise. Use C11 `_Atomic`, or (perhaps better) use an independent counter for each thread and add them up in the end (see the sketch after these comments). Also, join the threads in the end. – n. m. could be an AI Sep 08 '17 at 04:03
  • I have modified my code again, and the result is what I expected. But why does using a mutex produce an unreasonable result? Do you have any idea? – cong Sep 08 '17 at 05:04
  • When you use a mutex, one thread is running and all the rest are waiting, so you still have a mostly idle system. Look at the output of `top`; your goal is to have ncores * 100% CPU use. – n. m. could be an AI Sep 08 '17 at 06:05
  • But from the result of executing my code I can only conclude that increasing the nice value of the child process (which should reduce its priority) actually increases its priority, not the other way around; that's the strange part. Even if the mutex causes other threads to wait, that doesn't account for "201311332" being much bigger than "66907596", right? – cong Sep 08 '17 at 06:34
  • @cong Priorities matter **only** when you have more ready-to-run processes than free processors/cores. If you have idle processors, you cannot explain your result in terms of priorities, they are simply irrelevant. – n. m. could be an AI Sep 08 '17 at 06:53
  • It looks like the higher-priority process switches between threads a lot more and so spends a lot more time in "sys" than in "user" mode (see `times(2)`). Why this is so, I don't know. – n. m. could be an AI Sep 08 '17 at 07:33
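A minimal sketch of the per-thread-counter variant suggested in these comments, assuming pthreads; the fork is omitted to keep the focus on the counting strategy, and the thread count and 10-second deadline are illustrative assumptions:

/*
 * Sketch: each worker busy-increments its own private counter (no mutex,
 * no atomics in the hot loop); main() joins the threads and then adds
 * the counters up, so the sum is read only after each thread has exited.
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define NTHREADS 4

static struct timeval deadline;

static void *worker(void *arg)
{
    long long *my_count = arg;      /* points into main()'s per-thread array */
    struct timeval tv;
    do {
        ++*my_count;
        gettimeofday(&tv, NULL);
    } while (tv.tv_sec < deadline.tv_sec ||
             (tv.tv_sec == deadline.tv_sec && tv.tv_usec < deadline.tv_usec));
    return NULL;
}

int main(void)
{
    gettimeofday(&deadline, NULL);
    deadline.tv_sec += 10;

    pthread_t tid[NTHREADS];
    long long counts[NTHREADS] = {0};
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, &counts[i]);

    long long total = 0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL); /* join before reading the counter */
        total += counts[i];
    }
    printf("total count: %lld\n", total);
    return 0;
}

Because each worker touches only its own counter, the hot loop needs no synchronization at all; the pthread_join before reading each counter is what makes the final sum safe.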