This is perhaps too general a question, but it still bothers me. In general, which is the better practice (and why): proxying or subclassing a base class? By base I mean one of the standard classes, which are usually not even implemented in Python.
A more concrete question is as follows: I wish to create an object for graph edges; a graph edge is essentially a pair of vertices. A vertex can be any hashable object. I want an edge to expose many more methods than a frozenset does. So I wonder whether I should write
class Edge(frozenset):
    def my_method(self, *args):
        return 3
or
class Edge(object):
    def __init__(self, *args):
        self._frozenset = frozenset(*args)
    def __len__(self):
        return len(self._frozenset)
    # ... and so on for every other method I want to expose
The benefit of the first method is that I have to write less (since I don't have to duplicate all the original methods). The second method, however, looks safer in some sense. It is also more flexible, as it allows me to avoid exposing some methods which frozenset exposes (such as difference), if I wish.
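To make that last point concrete, I suppose I could also cut down the duplication in the second approach by delegating with __getattr__ and refusing the names I want to hide. This is only a rough sketch of what I have in mind; the _HIDDEN attribute and the particular hidden methods are an arbitrary choice for illustration:

class Edge(object):
    # names I deliberately do not expose (arbitrary choice, for illustration)
    _HIDDEN = frozenset(['difference', 'union'])

    def __init__(self, *args):
        self._frozenset = frozenset(*args)

    def __getattr__(self, name):
        # only called when normal lookup fails, so _frozenset and _HIDDEN
        # are found the usual way and do not recurse back in here
        if name in self._HIDDEN:
            raise AttributeError(name)
        return getattr(self._frozenset, name)

    def __len__(self):
        # special methods are looked up on the class, not the instance,
        # so __getattr__ does not cover them; they must be spelled out
        return len(self._frozenset)

With this, Edge(['a', 'b']).issubset(['a', 'b', 'c']) would be delegated to the wrapped frozenset, while Edge(['a', 'b']).difference(['a']) would raise AttributeError; but special methods like __len__, __iter__, __contains__ and __hash__ still have to be written out by hand.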
The first method also introduces an issue with the number of arguments passed: I probably have to override frozenset.__new__ if I wish to control that. On the other hand, the first method will probably be faster in general, since the proxying creates some overhead.
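For concreteness, here is roughly the kind of __new__ override I imagine for the first approach, assuming I want Edge(u, v) to take exactly two vertices (that two-argument signature is just one possible choice of interface):

class Edge(frozenset):
    def __new__(cls, u, v):
        # frozenset is immutable, so the contents must be fixed in __new__,
        # not in __init__; pass the two vertices as an iterable
        return super(Edge, cls).__new__(cls, (u, v))

    def my_method(self, *args):
        return 3

e = Edge('a', 'b')
print(len(e))                          # 2
print(Edge('a', 'b') == Edge('b', 'a'))  # True, order does not matter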
I don't know if it matters, but I usually write for Python 2.7.