8

I'm working on some code that generates a lot of

ignoring return value of ‘size_t fwrite(const void*, size_t, size_t, FILE*)’, declared with attribute warn_unused_result

warnings when compiled with g++, and I'm wondering about the best programming pattern to actually record and handle the return values of a large number of separate, sequential fwrite calls (i.e. not the same fwrite in a loop).

Let's say that the code looks like this at the moment:

fwrite (&blah, sizeof (blah), 1, fp);
// ... more code ...
fwrite (&foo, sizeof (foo), 1, fp);
// ... more code ...

I'm currently thinking about something like this, but I may have difficulty cleaning up the file pointer:

if (fwrite (&blah, sizeof (blah), 1, fp) != 1) return someerrorcode;
// ... more code ...
if (fwrite (&foo, sizeof (foo), 1, fp) != 1) return someerrorcode;
// ... more code ...

I think that approach is clearly better than nesting, which would get too crazy too quick:

if (fwrite (&blah, sizeof (blah), 1, fp) == 1) {
   // ... more code ...
   if (fwrite (&foo, sizeof (foo), 1, fp) == 1) {;
      // ... more code ...
   }
}

Surely there is already an established best-practice pattern for this sort of thing, though?

Of course, as I am mainly looking into this to get rid of the compiler warnings, I could just assign the return value to a dummy variable and ignore it, but I'd like to try doing it the right way first.

dummy = fwrite (&blah, sizeof (blah), 1, fp);
// ... more code ...
dummy = fwrite (&foo, sizeof (foo), 1, fp);
// ... more code ...

Update: I've removed the c++ tag as this code is really just c being compiled using g++, so c based solutions are needed to keep with the rest of the code base.

David Dean

13 Answers

10

The poor man's C exception handling based on goto (in fact, the one and only instance of goto NOT being harmful):

int foo() {
    FILE * fp = fopen(...);
    ....

    /* Note: fwrite returns the number of elements written, not bytes! */
    if (fwrite (&blah, sizeof (blah), 1, fp) != 1) goto error1;

    ...

    if (fwrite (&foo, sizeof (foo), 1, fp) != 1) goto error2;

    ...

ok:
    /* Everything went fine */
    fclose(fp);
    return 0;

error1:
    /* Error case 1 */
    fclose(fp);
    return -1;

error2:
    /* Error case 2 */
    fclose(fp);
    return -2;
}

You get the idea. Restructure as you wish (single/multiple returns, single cleanup label, custom error messages, etc.). In my experience this is the most common C error handling pattern out there. The crucial point is: NEVER, EVER ignore stdlib return codes, and no reason for doing so (e.g. readability), however good, is good enough.
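For instance, the single-cleanup restructuring mentioned above might look like the following sketch (write_all, blah and foo are placeholder names, not part of the answer above):

```c
#include <stdio.h>

/* Sketch of the single-cleanup-label variant: one fclose, one return,
   and an error code set before each jump. */
int write_all(FILE *fp, const int *blah, const int *foo)
{
    int rc = 0;

    if (fwrite(blah, sizeof *blah, 1, fp) != 1) { rc = -1; goto cleanup; }
    /* ... more code ... */
    if (fwrite(foo, sizeof *foo, 1, fp) != 1) { rc = -2; goto cleanup; }
    /* ... more code ... */

cleanup:
    fclose(fp);   /* runs on both the success and the error paths */
    return rc;
}
```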

fbonnet
10

I'd do something along these lines:

FILE * file = fopen("foo", "wb");
if(!file) return FAILURE;

// assume failure by default
_Bool success = 0;

do
{
    if(!fwrite(&bar, sizeof(bar), 1, file))
        break;

    // [...]

    if(!fwrite(&baz, sizeof(baz), 1, file))
        break;

    // [...]

    success = 1;
} while(0);

fclose(file);

return success ? SUCCESS : FAILURE;

With a little C99 macro magic

#define with(SUBJECT, FINALIZE, ...) do { \
    if(SUBJECT) do { __VA_ARGS__ } while(0); if(SUBJECT) FINALIZE; \
} while(0)

and using ferror() instead of our own error flag as suggested by Jonathan Leffler (captured just before fclose(), since the stream must not be examined after it has been closed), this can be written as

FILE * file = fopen("foo", "wb");
int io_ok = 0;
with(file, (io_ok = !ferror(file), fclose(file)),
{
    if(!fwrite(&bar, sizeof(bar), 1, file))
        break;

    // [...]

    if(!fwrite(&baz, sizeof(baz), 1, file))
        break;

    // [...]
});

return file && io_ok ? SUCCESS : FAILURE;

If there are other error conditions aside from io errors, you'll still have to track them with one or more error variables, though.

Also, note that fwrite() returns the count of objects written, not bytes, so a check against sizeof(blah) would be wrong; comparing the result against the element count (1 here) is the correct test.

Christoph
    Note that ferror() tells you whether there was an error on the stream. – Jonathan Leffler Feb 20 '09 at 15:28
  • One annoying problem with this macro-based approach is that compiler errors won't report correct line numbers for code inside the `with` statement—depending on the compiler, it will report either the line containing the `with` keyword or the line with the closing parenthesis of the macro invocation for any errors. – Adam Rosenfield Jun 10 '11 at 05:20
2

You could write a wrapper function

void new_fwrite(const void *a, size_t b, size_t c, FILE *d) {
    if (fwrite (a, b, c, d) != c)
       throw exception;
}

and then replace all calls to fwrite with new_fwrite

Patrick McDonald
2

Ignoring errors is a bad idea. It's much better to do something nasty like crash the program so that at least you know something went wrong, rather than silently proceeding. Even better is nice error checking and recovery.

If you're using C++, you can create an RAII wrapper for the FILE* so that it'll always get closed. Look at std::auto_ptr for ideas. You can then return a useful error code from the function whenever you wish, or throw an exception, and not have to worry about forgotten cleanup items.

Mr Fooz
0

You can remove the warnings by casting the result to void, like this (though note that some GCC/glibc combinations deliberately keep warning despite the cast):

(void) fwrite ( ,,,, );

Addressing your main question, if any of the fwrite() calls fail, I'd guess that it doesn't make sense to continue, as the output is presumably corrupt. In that case, and as you tagged this C++, I'd throw an exception.

0

Nesting is bad, and multiple returns are not good either.

I used to use following pattern:

#define SUCCESS (0)
#define FAIL    (-1)
int ret = SUCCESS;

if (!fwrite(...))
    ret = FAIL;
if (SUCCESS == ret) {
    do_something;
    do_something_more;
    if (!fwrite(...))
        ret = FAIL;
}
if (SUCCESS == ret)
    do_something;

return ret;

I know it looks ugly, but it has a single return point, no excessive nesting, and is very easy to maintain.

qrdl
  • “Single return point” is a _very_ subjective rule to code by. It should not be mentioned here. – Bombe Feb 20 '09 at 13:46
  • I agree with Bombe. Stating the multiple returns aren't good as a general fact isn't kosher. – Greg D Feb 20 '09 at 13:48
    Also, the fwrite return could be non-zero and still indicate failure, you need to check it against the size of the data to be written – David Dean Feb 20 '09 at 13:49
  • Single return point allows you to make a cleanup just once. I know cleanup isn't always needed but it makes sense to develop a good habit. It pays off. – qrdl Feb 20 '09 at 13:51
  • @David Dean - it is just an example to illustrate the concept. It is very easy to make proper error handling this way. – qrdl Feb 20 '09 at 13:53
  • @David Dean: if you always write 1 item of the size specified, fwrite() returns either 0 or 1. – Jonathan Leffler Feb 20 '09 at 15:31
0

Well ... You could create a wrapper function, that re-tries the write if it fails, perhaps up to some maximum number of retries, and returns success/failure:

int safe_fwrite(FILE *file, const void *data, size_t nbytes, unsigned int retries);
void print_and_exit(const char *message);

Then your main code could be written as

#define RETRIES 5
if(!safe_fwrite(fp, &blah, sizeof blah, RETRIES))
  print_and_exit("Blah writing failed, aborting");
if(!safe_fwrite(fp, &foo, sizeof foo, RETRIES))
  print_and_exit("Foo writing failed, aborting");
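
The declaration above leaves the retry logic to the imagination; one possible implementation could look like this (a sketch only; the partial-write bookkeeping and the clearerr() call are my assumptions, not part of the answer):

```c
#include <stdio.h>

/* Possible implementation of the safe_fwrite declared above: retries the
   write up to `retries` times, continuing from where the previous attempt
   stopped. Returns 1 on success, 0 on failure. */
int safe_fwrite(FILE *file, const void *data, size_t nbytes, unsigned int retries)
{
    const unsigned char *p = data;

    while (retries-- > 0) {
        size_t written = fwrite(p, 1, nbytes, file);
        p += written;
        nbytes -= written;
        if (nbytes == 0)
            return 1;      /* everything written */
        clearerr(file);    /* clear the error flag before the next attempt */
    }
    return 0;
}
```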
unwind
0

Your first solution looks OK. Usually a goto err; comes in handy, as you may need some common cleanup (such as rewinding to a known position).

To make GCC quiet just do:

(void)fwrite (&blah, sizeof (blah), 1, fp);
(void)fwrite (&foo, sizeof (foo), 1, fp);
kmkaplan
  • 18,655
  • 4
  • 51
  • 65
0

Something like this would work

if (fwrite (&blah, sizeof (blah), 1, fp) != 1) throw SomeException;

If you're worried about pointers being cleaned up you can wrap the pointer in some form of smart pointer before carrying out your fwrite's.

If you don't want to use smart pointers then this will work, but it's messy, so I'd try the smart pointer route first

if (fwrite (&blah, sizeof (blah), 1, fp) != 1) {
    //cleanup here
    throw SomeException;
}
Glen
0

Why don’t you wrap the fwrite into a Writer object of some kind and throw an exception if fwrite() returns an error code? Easy to code, easy to use, easy to manage. IMHO, of course. :)

Bombe
0

Maybe something like this? You catch the errors without making the code too unreadable, and you can do cleanup after the end of the faux loop.

#define WRITE_ERROR 100
#define WRITE_OK 0

int do_fwrite(const void* ptr, size_t bytes, FILE* fp) {
  if ( fwrite(ptr, bytes, 1, fp) != 1 ) return WRITE_ERROR;
  return WRITE_OK;
}

int my_func() {
   int errcode = 0;

   ...
   do {
     if ( (errcode = do_fwrite(&blah, sizeof(blah), fp)) ) break;
     ....
     if ( (errcode = do_fwrite(&foo, sizeof(foo), fp)) ) break;
     ....
     etc
   } while( 0 );

   fclose(fp);
   return errcode;
}
Eric Petroelje
0

Ok, given that I'm looking for a c solution (no exceptions), how about:

void safe_fwrite(const void *data, size_t size, size_t count, FILE *fp) {
   if (fwrite(data, size, count, fp) != count) {
      fprintf(stderr, "[ERROR] fwrite failed!\n");
      fclose(fp);
      exit(4);
   }
}

And then in my code I have:

safe_fwrite (&blah, sizeof (blah), 1, fp);
// ... more code ...
safe_fwrite (&foo, sizeof (foo), 1, fp);
// ... more code ...
Thomas Eding
David Dean
  • Exiting on error is a bit hardcore, but if it fits your app's needs, why not. In this case no need to worry about cleanup, though, so this doesn't really answer your original question, and wrapping only shifts the burden. See my answer for a widely used 'best' practice. – fbonnet Feb 20 '09 at 14:09
0

A potentially elegant C solution for this could be something like this (warning - untested, uncompiled code ahead):


  int    ok;
  size_t num_elements = x;

  ok = (fwrite(stuff, sizeof(data), num_elements, outfile) == num_elements);

  if (ok) {
    ... do other stuff ...
  }

  ok = ok && (fwrite(stuff, sizeof(data), num_elements, outfile) == num_elements);

  if (ok) {
    ... etc etc ad nauseam ...
  }

  fclose(outfile);

  return ok;

The above accomplishes two goals at the same time:

  • Return values are checked, thus eliminating the warning and giving you the ability to return a status code.
  • Thanks to short-circuit evaluation, should one of the fwrite() calls fail, the subsequent ones will not be executed. At least the file write stops, instead of giving you a potentially corrupt file if the error condition disappears halfway through the function and writing succeeds again.

Unfortunately the 'ugly' if (ok) blocks are necessary if you don't want to use the short-circuit evaluation everywhere. I've seen this pattern used in comparatively small functions using short circuit evaluation everywhere and I would think that it's probably best suited to that particular use.

Timo Geusch
  • the only problem is that the .. do other stuff .. is still executed, when the program should really just terminate (as the other stuff is basically just processing in order to do the next fwrite) – David Dean Feb 20 '09 at 14:18
  • You're correct - I've updated the code and put a little caveat in the answer. – Timo Geusch Feb 20 '09 at 14:23
  • You could write "if ((ok = ok && (fwrite(...)))) {" to avoid the 'if' blocks, wrapped around a macro if you want more readable code. – fbonnet Feb 20 '09 at 14:58
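
fbonnet's macro suggestion in the last comment might look like this (a sketch only; OK_WRITE and write_two are invented names):

```c
#include <stdio.h>

/* Hypothetical macro folding the short-circuit check into the if
   condition: once `ok` turns false, later fwrite calls are skipped. */
#define OK_WRITE(ok, ptr, size, count, fp) \
    ((ok) = (ok) && fwrite((ptr), (size), (count), (fp)) == (size_t)(count))

int write_two(FILE *outfile, const int *a, const int *b)
{
    int ok = 1;

    if (OK_WRITE(ok, a, sizeof *a, 1, outfile)) {
        /* ... do other stuff ... */
    }
    if (OK_WRITE(ok, b, sizeof *b, 1, outfile)) {
        /* ... etc ... */
    }
    return ok;
}
```

The extra parentheses in the macro expansion also keep GCC's assignment-in-condition warning quiet.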