With Boost.Python, I was adding an attribute to my Python wrapper whose value came from an enumerated type, for instance:
scope().attr("myconstant")=some_namespace::some_class::some_enum_value;
But I got a runtime error when I imported my Python module:
terminate called after throwing an instance of 'boost::python::error_already_set'
Following other threads, I wrapped the assignment in a try/catch, but nothing was ever caught, so I had nothing to call PyErr_Fetch on. I'm still curious where the original Python error occurred.
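For concreteness, here is roughly what I tried (the module name is made up, and I'm guessing at where the handler belongs):

#include <boost/python.hpp>

BOOST_PYTHON_MODULE(mymodule)  // made-up module name
{
    try {
        boost::python::scope().attr("myconstant") =
            some_namespace::some_class::some_enum_value;
    }
    catch (const boost::python::error_already_set&) {
        // I expected to land here and call PyErr_Fetch or PyErr_Print
        // to see the underlying Python error, but this handler was
        // never entered.
        PyErr_Print();
    }
}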
It turns out that I have to do:
scope().attr("myconstant") = int(some_namespace::some_class::some_enum_value);
and then it runs.
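What seems like the cleaner fix is to expose the enum itself, which registers a to-Python converter for it and makes the original assignment work without the cast. A sketch, where I'm writing some_enum as a stand-in for the real type name:

#include <boost/python.hpp>
using namespace boost::python;

BOOST_PYTHON_MODULE(mymodule)  // made-up module name
{
    // "some_enum" is a placeholder for the real nested enum type.
    enum_<some_namespace::some_class::some_enum>("some_enum")
        .value("some_enum_value", some_namespace::some_class::some_enum_value);

    // With the converter registered, the assignment no longer needs int().
    scope().attr("myconstant") = some_namespace::some_class::some_enum_value;
}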
Another, and I think related, problem: if you export a C++ function in your Python wrapper that returns a C++ enum, but you do not export that enum, everything is fine until you call that function from Python. At that point Boost generates a Python exception about a type not being found.
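A minimal repro of what I mean, with made-up names:

#include <boost/python.hpp>
using namespace boost::python;

enum Status { kOk, kError };          // made-up enum
Status get_status() { return kOk; }   // made-up function

BOOST_PYTHON_MODULE(example)
{
    // This compiles and the module imports fine...
    def("get_status", &get_status);

    // ...but calling example.get_status() from Python raises something like
    //   TypeError: No to_python (by-value) converter found for C++ type
    // unless the enum is also exposed:
    //   enum_<Status>("Status").value("kOk", kOk).value("kError", kError);
}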
So clearly Boost is doing some things at runtime that seem (to me) like they should be done at compile time. Both of these problems were time-consuming to diagnose. Does anyone know what is going on? With more happening at runtime than I'd expect, will I hit performance issues with Boost.Python that I wouldn't get if I worked directly with the Python extension API? Beyond performance, I'm concerned that Boost.Python code has more errors that won't be found until runtime than direct Python extension code would have.
On the flip side, is there some big gain to be had from all this dynamic type binding? Clearly there is the nice Boost interface for writing my own Python extension, but does all this dynamic binding also make it easier to add new Boost.Python wrappers into an existing system than wrappers written directly against the Python extension API?