I was just reviewing some code that creates a C extension module for Python and doesn't contain enough error checking. Adding the missing checks was easy enough in most cases, but when it came to the module-init function I wasn't sure.
Just for the sake of discussion, let's take the (abridged) module-init function for itertools (yes, the one shipped by CPython):
PyMODINIT_FUNC
PyInit_itertools(void)
{
    /* declarations of i, m, name and the typelist[] array abridged */
    m = PyModule_Create(&itertoolsmodule);
    if (m == NULL)
        return NULL;

    for (i=0 ; typelist[i] != NULL ; i++) {
        if (PyType_Ready(typelist[i]) < 0)
            return NULL;
        name = strchr(typelist[i]->tp_name, '.');
        assert (name != NULL);
        Py_INCREF(typelist[i]);
        PyModule_AddObject(m, name+1, (PyObject *)typelist[i]);
    }
    return m;
}
It does check whether PyModule_Create fails (which is good), and it checks whether PyType_Ready fails (also good), although it doesn't Py_DECREF(m) in that case (which is surprising/confusing). But it completely fails to check whether PyModule_AddObject fails, even though, according to its documentation, it can fail:
Add an object to module as name. This is a convenience function which can be used from the module’s initialization function. This steals a reference to value. Return -1 on error, 0 on success.
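For comparison, this is roughly how I would have written the loop with full error handling. This is just my own sketch, not CPython's code, and it assumes that PyModule_AddObject leaves ownership of the reference with the caller when it fails:

for (i=0 ; typelist[i] != NULL ; i++) {
    if (PyType_Ready(typelist[i]) < 0) {
        Py_DECREF(m);
        return NULL;
    }
    name = strchr(typelist[i]->tp_name, '.');
    assert (name != NULL);
    Py_INCREF(typelist[i]);
    if (PyModule_AddObject(m, name+1, (PyObject *)typelist[i]) < 0) {
        /* AddObject did not take our reference, so drop it ourselves,
           then drop the half-initialized module and propagate the error. */
        Py_DECREF(typelist[i]);
        Py_DECREF(m);
        return NULL;
    }
}
return m;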
Okay, maybe it seemed like overkill to abort the module initialization just because a type couldn't be added. But even if they didn't want to abort creating the module completely: when PyModule_AddObject fails, the reference to typelist[i] taken by the preceding Py_INCREF leaks, correct?
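In other words, as far as I understand it, even a "keep going" variant would need something like the following to avoid the leak (again just a sketch of mine, not something CPython does):

Py_INCREF(typelist[i]);
if (PyModule_AddObject(m, name+1, (PyObject *)typelist[i]) < 0) {
    /* Give back the reference we just took and discard the error
       instead of aborting module creation. */
    Py_DECREF(typelist[i]);
    PyErr_Clear();
}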
Lots of built-in CPython C modules don't do thorough error checking and handling in their module-init functions (which is probably why the C extension I'm fixing doesn't have it either), even though the CPython developers are usually very strict about this kind of issue and about potential leaks. So my question is basically: are the error checks important in the module-init function, especially when it comes to the PyModule_Add* functions (like PyModule_AddObject)? Or can they be omitted, as CPython does in many places?