$ cd glibc-2.23
$ grep -ErI --include='*.c' '= *f?put([cs]|char)\>' |wc -l
1
$ grep -ErI --include='*.c' '[^= ] *f?put([cs]|char)\>' |wc -l
1764

$ man putc
...
RETURN VALUE
   fputc(), putc() and putchar() return the character written
   as an unsigned char cast to an int or EOF on error.

Perhaps it is "unlikely enough" that, for example, in a sequence of putchar calls one or two fail unnoticed while the preceding and succeeding ones succeed?
Or is it that, on at least 99.9999% of implementations, these functions simply never take the "return such-and-such" error path?
Or would continuous error checking, after every byte (in the case of putchar, putc, fputc), cause too big a performance hit?
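
To make the trade-off concrete, here is a sketch of my own (the function names are mine, not from glibc or any real project) contrasting the two styles: checking every call versus writing everything and asking the stream once, via ferror(), whether anything went wrong. Because a stream's error indicator stays set until clearerr(), the deferred check loses nothing except the exact position of the first failure.

#include <stdio.h>

/* Style 1: check every call, as the return value allows. */
int write_all_checked(const char *s, FILE *fp)
{
    for (; *s; s++)
        if (fputc(*s, fp) == EOF)
            return -1;               /* report the first failed write */
    return 0;
}

/* Style 2: write freely, then ask the stream once whether anything failed. */
int write_all_deferred(const char *s, FILE *fp)
{
    for (; *s; s++)
        fputc(*s, fp);               /* return values deliberately ignored */
    return ferror(fp) ? -1 : 0;      /* the error flag stays set until clearerr() */
}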

  • Related: http://softwareengineering.stackexchange.com/questions/302730/should-one-check-for-every-little-error-in-c – cadaniluk Dec 08 '16 at 15:22
  • I'd say it's a combination of: it's a nuisance, the errors are extremely unlikely, and "don't check for errors you can't handle". But I don't think it's performance. If you want to check for write errors and you have a lot of individual `putc` calls, it suffices to check the return value on some of them; you don't have to check all of them. – Steve Summit Dec 08 '16 at 15:31
  • Good question about C, but I see it as too opinion-based for a good answer here on Stack Overflow. Unclear what other SE site would be good. – chux - Reinstate Monica Dec 08 '16 at 16:12

2 Answers


Ignoring return values from functions is the source of many hard-to-find bugs in software. Your statement that "nobody" checks the return values is not correct: high-reliability software systems do check them.
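
For example, a minimal sketch of the "check and react" style (my own illustration, not taken from any particular code base): the write failure is reported instead of being silently dropped.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Treat a failed write as a real error rather than silently dropping it. */
    if (puts("hello, world") == EOF) {
        perror("puts");          /* say why the write failed */
        return EXIT_FAILURE;     /* and report failure to the caller */
    }
    return EXIT_SUCCESS;
}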

tddguru

These return values are often ignored for a couple of reasons. First, on most systems the chance of one of these calls failing is remote enough that most people just don't care.

Second, if they do fail, there's often relatively little you can do about it anyway: if the OS has gotten into a state where writing to a file fails, the usual reactions, such as displaying a message to the user or logging the error, may easily fail as well.

Finally, in most cases you can simplify the problem quite a bit: do your I/O, and then only check whether fclose succeeded. If the file has gotten into a failed state, you can expect fclose to fail, so catching the error earlier (when you did the I/O) is typically only an optimization: you can detect the same problem by checking only fclose, though you might waste some time in futile attempts to write to a file after you could have detected the problem.
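
A minimal sketch of that simplification (the function name is mine; the extra ferror() check is a belt-and-braces addition of my own, since fclose() is only guaranteed to report errors it detects itself, e.g. while flushing):

#include <stdio.h>

int save_text(const char *path, const char *text)
{
    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return -1;

    fputs(text, fp);                      /* per-call checks intentionally skipped */

    int earlier_error = ferror(fp);       /* anything already flagged on the stream */
    if (fclose(fp) == EOF || earlier_error)
        return -1;                        /* fclose() flushes, so pending writes can fail here */
    return 0;
}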

Even checking the return from fclose is fairly unusual, though. You still have the same problem mentioned above: if the system has failed to the point that fclose fails, there's a fairly decent chance that most of your attempts at reacting to the failure will also fail.

There are still some cases where it makes sense though. For example, consider moving a file from one place to another (e.g., across a network). You want to check that you wrote the data to the destination successfully before you delete the source file. In this case, checking the return from fclose is a fairly easy way to reduce the chances of destroying the user's file, in case the attempt at copying failed.
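
A sketch of that idea, assuming a plain copy-then-delete move (the function name and buffer size are my choices): the source is only removed once both streams report a clean copy.

#include <stdio.h>

int move_file(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    if (in == NULL)
        return -1;

    FILE *out = fopen(dst, "wb");
    if (out == NULL) {
        fclose(in);
        return -1;
    }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);          /* per-call checks skipped on purpose */

    int read_failed  = ferror(in);
    int write_failed = ferror(out);

    fclose(in);
    if (fclose(out) == EOF)              /* the final flush can still fail */
        write_failed = 1;

    if (read_failed || write_failed)
        return -1;                       /* keep the source; the copy is suspect */

    return remove(src) == 0 ? 0 : -1;    /* only now is it safe to drop the source */
}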

Jerry Coffin