12

I'm writing a small library that takes a FILE * pointer as input.

If my immediate check of this FILE * pointer leads to a segfault, is it more correct to handle the signal, set errno, and exit gracefully; or to do nothing and defer to the caller's installed signal handler, if he has one?

The prevailing wisdom seems to be "libraries should never cause a crash." But my thinking is that, since this particular signal is certainly the caller's fault, I shouldn't attempt to hide that information from him. He may have his own handler installed to react to the problem in his own way. The same information CAN be retrieved with errno, but the default disposition for SIGSEGV was set for a good reason, and passing the signal up respects this philosophy by either forcing the caller to handle his errors, or by crashing and protecting him from further damage.
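Concretely, the errno route I'm weighing would look something like this sketch (the function name and choice of EINVAL are just illustrative, not an actual API):

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical library entry point: validate what can be checked
 * portably (a NULL pointer), report the failure through errno and a
 * return code, and leave anything worse to the default SIGSEGV
 * disposition. */
int lib_process(FILE *f)
{
    if (f == NULL) {
        errno = EINVAL;  /* caller broke the contract in a detectable way */
        return -1;
    }
    /* ... real work on f would go here ... */
    return 0;
}
```

Anything beyond the NULL check (a dangling or closed FILE *) can't be detected portably, which is where the signal question comes in.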

Would you agree with this analysis, or do you see some compelling reason to handle SIGSEGV in this situation?

User123abc
    AFAIK the standard C library doesn't handle the crash internally, so why should your lib? However, I guess it depends on what you're planning to do with your lib. (How does a check on a FILE * lead to a SIGSEGV, btw? Just curious) – BigMike Jan 23 '12 at 19:53

6 Answers

7

Taking over signal handlers is not library business; I'd say it's somewhat offensive of a library unless explicitly asked for. To minimize crashes, a library may validate its input to some extent. Beyond that: garbage in, garbage out.

Michael Krelin - hacker
5

The prevailing wisdom seems to be "libraries should never cause a crash."

I don't know where you got that from - if they pass an invalid pointer, you should crash. Any library will.

James M
3

I would consider it reasonable to check for the special case of a NULL pointer. But beyond that, if they pass junk, they violated the function's contract and they get a crash.

Evan Teran
2

This is a subjective question, and possibly not fit for SO, but I will present my opinion:

Think about it this way: If you have a function that takes a nul-terminated char * string and is documented as such, and the caller passes a string without the nul terminator, should you catch the signal and slap the caller on the wrist? Or should you let it crash and make the bad programmer using your API fix his/her code?

If your code takes a FILE * pointer, and your documentation says "pass any open FILE *", and they pass a closed or invalidated FILE * object, they've broken the contract. Checking for this case would slow down the code of people who properly use your library to accommodate people who don't, whereas letting it crash will keep the code as fast as possible for the people who read the documentation and write good code.

Do you expect someone who passes an invalid FILE * pointer to check for and correctly handle an error? Or are they more likely to blindly carry on, causing another crash later, in which case handling this crash may just disguise the error?

Chris Lutz
    Further there's no reason to expect that it's even possible to catch the case of an invalid `FILE *` being passed. It's entirely dependent on the system's implementation and internal behavior. An invalid `FILE *` could easily seem to "work" yet do something completely bogus, even corrupt your library code and prevent its checks from working (on a system without memory protection). – R.. GitHub STOP HELPING ICE Jan 23 '12 at 20:01
1

Kernels shouldn't crash if you feed them a bad pointer, but libraries probably should. That doesn't mean you should do no error checking; a good program dies immediately in the face of unreasonably bad data. I'd much rather have a library call bail with assert(f != NULL) than have it trundle on and eventually dereference the NULL pointer.
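A minimal sketch of that fail-fast style (the function name is invented; only the assert pattern matters):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical library function: bail immediately on obviously bad
 * input rather than trundling on toward a later, harder-to-diagnose
 * crash. */
long lib_count_bytes(FILE *f)
{
    assert(f != NULL && "lib_count_bytes: f must be an open FILE *");

    long n = 0;
    while (fgetc(f) != EOF)
        n++;
    return n;
}
```

Note that assert compiles away when NDEBUG is defined, so release builds fall back to the plain crash; that's consistent with the "die immediately in debug, don't pay for checks in release" philosophy.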

Kyle Jones
0

Sorry, but people who say a library should crash are just being lazy (perhaps with thinking time, as well as development effort). Libraries are collections of functions. Library code should not "just crash" any more than the other functions in your software should "just crash".

Granted, libraries may have some issues around how to pass errors across the API boundary, if multiple languages or (relatively) exotic language features like exceptions would normally be involved, but there's nothing TOO special about that. Really, it's just part of the burden of writing libraries, as opposed to in-application code.

Except where you really can't justify the overhead, every interface between systems should implement sanity checking, or better, design by contract, to prevent security issues, as well as bugs.

There are a number of ways to handle this. What you should probably do, in order of preference, is one of:

  1. Use a language that supports exceptions (or better, design by contract) within libraries, and throw an exception on or allow the contract to fail.

  2. Provide an error handling signal/slot or hook/callback mechanism, and call any registered handlers. Require that, when your library is initialised, at least one error handler is registered.

  3. Support returning some error code in every function that could possibly fail, for any reason. But this is the old, relatively insane way of doing things from C (as opposed to C++) days.

  4. Set some global "an error has occurred" flag, and allow clearing that flag before calls. This is also old, and completely insane, mostly because it moves the error-status maintenance burden to the caller, AND is unsafe when it comes to threading.
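For C specifically, option 3 is the idiom that applies here. A sketch of what that convention might look like (the enum and function are illustrative, not a real API):

```c
#include <stdio.h>

/* Illustrative status codes for a small library (option 3 above). */
enum lib_status {
    LIB_OK = 0,
    LIB_ERR_NULL_ARG,   /* caller passed a NULL pointer */
    LIB_ERR_IO          /* read failed or stream was empty */
};

/* Read the first byte of the stream into *out, reporting every
 * detectable failure through the return code instead of crashing. */
enum lib_status lib_first_byte(FILE *f, int *out)
{
    if (f == NULL || out == NULL)
        return LIB_ERR_NULL_ARG;

    int c = fgetc(f);
    if (c == EOF)
        return LIB_ERR_IO;

    *out = c;
    return LIB_OK;
}
```

Callers then check the return value of every call; verbose, but the error path is explicit at each boundary crossing.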