
Here is a test program I wrote:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, const char* argv[])
{
  const char name[] = "/dev/shm/test_file";
  off_t len = atol(argv[argc - 1]);   /* number of 1 KiB blocks to write */
  char buf[1024] = {0};
  FILE *f = fopen(name, "w");
  if (f == NULL) {
    perror("fopen");
    return 1;
  }
  for (int i = 0; i < len; i++) {
    int ret = fwrite(buf, 1024, 1, f);
    if (ret != 1) {
      printf("disk full\n");
    }
  }
  if (fclose(f) != 0)
    printf("failed to close\n");
  return 0;
}

I filled /dev/shm until it was almost full:

tmpfs            36G   36G   92K 100% /dev/shm

and ran

$ ./a.out 93
failed to close

My glibc:

$ /lib/libc.so.6 
GNU C Library stable release version 2.12, by Roland McGrath et al.

The kernel version is 2.6.32-642.13.1.el6.x86_64.

I understand that this behavior is caused by fwrite caching the data in memory (when I disabled buffering with setvbuf(NULL...), fwrite returned failure immediately). But this seems a little different from the definition:
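For reference, a minimal sketch of the unbuffered variant I mean (the exact setvbuf arguments are elided above; here I assume _IONBF and the same test file name):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  const char *name = "/dev/shm/test_file";
  char buf[1024] = {0};
  FILE *f = fopen(name, "w");
  if (f == NULL) {
    perror("fopen");
    return 1;
  }
  /* Disable stdio buffering: every fwrite now issues a write(2) directly. */
  setvbuf(f, NULL, _IONBF, 0);
  if (fwrite(buf, 1024, 1, f) != 1) {
    /* With no buffer to absorb the data, ENOSPC shows up right here. */
    perror("fwrite");
  }
  fclose(f);
  return 0;
}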

The fwrite() function shall return the number of elements successfully written, which may be less than nitems if a write error is encountered. If size or nitems is 0, fwrite() shall return 0 and the state of the stream remains unchanged. Otherwise, if a write error occurs, the error indicator for the stream shall be set, [CX] [Option Start] and errno shall be set to indicate the error. [Option End]

The data was not successfully written to disk, yet the return value is 1 and no errno is set. In this test case, fclose catches the failure, but it could even be caught by an ftell call, which is quite confusing.

I am wondering if this happens with all versions of glibc, and whether this would be considered a bug.

HLi

1 Answer


The data was not successfully written to disk

The standard doesn't talk about the disk. It talks about data being successfully written to the stream (which it has been).

I am wondering if this happens with all versions of glibc

Most likely.

and whether this would be considered a bug.

It's a bug in your interpretation of the requirements on fwrite.
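If you want to see the error before fclose, flush the stream yourself and check the result. A minimal sketch (the helper name and messages are mine, not part of any API):

#include <stdio.h>

/* Returns 0 on success, -1 if buffered data could not be written out. */
static int flush_and_check(FILE *f)
{
  /* fflush pushes the stdio buffer down to the kernel via write(2);
     a delayed ENOSPC becomes visible here. */
  if (fflush(f) != 0) {
    perror("fflush");
    return -1;
  }
  /* ferror reports any write error already recorded on the stream. */
  if (ferror(f)) {
    fprintf(stderr, "stream has a pending write error\n");
    return -1;
  }
  return 0;
}

Call it after the fwrite loop, and still check the return value of fclose: the final flush can fail there as well.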

Employed Russian