
When answering a recent question I repeated my assumption that one reason for using @staticmethod was to save RAM, since a static method is only ever instantiated once. This assertion can be found fairly easily online (e.g. here), and I don't know where I first encountered it.

My reasoning was based on two assumptions, one of them false: (a) that Python instantiates all methods when instantiating a class (which is not the case, as a little thought would have shown, oops), and (b) that static methods are not instantiated on access, but just called directly. Thus I thought that this code:

import asyncio

class Test:
    async def meth1(self):
        await asyncio.sleep(10)
        return 78

async def main():
    t1 = Test()
    t2 = Test()
    asyncio.create_task(t1.meth1())
    asyncio.create_task(t2.meth1())
    for _ in range(10):
        await asyncio.sleep(2)

asyncio.run(main())

would use more RAM than if I defined the class like this:

class Test:
    @staticmethod
    async def meth1():
        await asyncio.sleep(10)
        return 78

Is this the case? Do static methods get instantiated on access? Do classmethods? I know that t1.meth1 is t2.meth1 will return True in the second case and False in the first, but is that because Python instantiates meth1 the first time and then looks it up the second, or because in both cases it merely looks it up, or because in both cases it gets a copy of the static method which is somehow the same (I presume not)? The id of a staticmethod appears not to change, but I'm not sure what my access to it is doing.
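To show the identity behaviour I mean, here is a minimal experiment (Plain and Static are made-up names, stripped of the asyncio machinery):

```python
class Plain:
    def meth1(self):
        return 78

class Static:
    @staticmethod
    def meth1():
        return 78

p1, p2 = Plain(), Plain()
s1, s2 = Static(), Static()

# Each attribute access on an instance creates a fresh bound-method object:
print(p1.meth1 is p2.meth1)  # False
# Accessing a staticmethod just returns the one underlying function:
print(s1.meth1 is s2.meth1)  # True
```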

Is there any real-world reason to care, if so? I've seen an abundance of static methods in micropython code where multiple instances exist in asynchronous code at once. I assumed this was for RAM saving, but I suspect I'm wrong. I'd be interested to know if there is any difference between the micropython and CPython implementations here.

Edit: Am I correct in thinking that calling t1.meth1() and t2.meth1() will bind the method twice in the first case and once in the second?
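A quick check of the binding behaviour (a sketch; Demo is a hypothetical class):

```python
class Demo:
    def meth(self):
        pass

d = Demo()
a = d.meth
b = d.meth
# CPython creates a new bound-method object on every attribute access:
print(a is b)                                 # False
# ...but both wrap the very same underlying function:
print(a.__func__ is b.__func__ is Demo.meth)  # True
```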

2e0byo
    `Is there any real world reason to care if so?` No. And if anything, `@staticmethod` would cost _more_ RAM, because it causes the original function to be wrapped in another one. – Thomas Oct 14 '21 at 18:22
    This talk about "instantiation" is a red herring. Methods are not instantiated – at most they are *bound* but the entire point is that this happens on-demand, so it's a *compute* cost, not a memory cost. If instead of testing things like ``t1.meth1 is t2.meth1`` you would just look at ``t1.meth1`` you would see that it is just the function – there is no staticmethod "instantiated" or bound. – MisterMiyagi Oct 14 '21 at 18:25
  • @Thomas won't it only do that *once* though? I do realise with the ram on standard computers this would be a micro-optimisation, but I was thinking of having e.g. 40 parallel methods running for a web server on a tiny device, where it could conceivably matter – 2e0byo Oct 14 '21 at 18:25
  • One of the most important rules of programming is that premature optimization is the root of all evil. You should write your code in the manner that is most clear to you and to everyone who is going to have to read your code after you. If, once the code is done running, you discover that this is a hot point of your code, then by all means change it from an instance method to a static method. But only then. – Frank Yellin Oct 14 '21 at 18:29
  • It will only do it once. So that's why there's no reason to care: it's just a handful of bytes extra. And if a handful of bytes matters on your target platform, you're probably not using Python in the first place :) (Though I wouldn't know about MicroPython.) – Thomas Oct 14 '21 at 18:30
    @FrankYellin indeed, and I stress that I *don't* use `staticmethods` (or classmethods) to save ram, but only when I don't want `self`. I've just seen this so often in *micro*python I assumed (based on the assertions around the 'net) that it was for ram saving and had some noticeable effect. But I've never profiled it so I would be guilty if I did it :) – 2e0byo Oct 14 '21 at 18:30
  • @Thomas I've updated the question as I think I might be wrong even about binding – 2e0byo Oct 14 '21 at 18:33
  • You might find [this HowTo guide on descriptors](https://docs.python.org/3/howto/descriptor.html#descriptor-howto-guide) helpful. It includes a section on staticmethods (but I'd recommend reading the whole thing) – Alex Waygood Oct 14 '21 at 18:44
  • ^also, that article you link to is quite awful! – Alex Waygood Oct 14 '21 at 18:46
    @AlexWaygood thank you, I shall read that properly. The linked article is certainly poor, but I guess I've seen the claim so many times I got used to it. It seems quite clearly wrong though – 2e0byo Oct 14 '21 at 18:47

1 Answer


Methods do not get "instantiated", they get bound – that is a fancy word for "their self/cls parameter is filled", similar to partial parameter binding. The entire point of staticmethod is that there is no self/cls parameter and thus no binding is needed.
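As a sketch of what "binding" means here (an analogy with functools.partial, not the actual implementation; Greeter is a made-up class):

```python
from functools import partial

class Greeter:
    def greet(self, name):
        return f"hello {name}"

g = Greeter()
bound = g.greet                     # binding fills in self on access
manual = partial(Greeter.greet, g)  # roughly what binding amounts to

print(bound("x") == manual("x"))    # True: both call Greeter.greet(g, "x")
print(bound.__self__ is g)          # True: the bound instance is recorded
```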

In fact, fetching a staticmethod does nothing at all – it just returns the function unchanged:

>>> class Test:
...     @staticmethod
...     async def meth1():
...         await asyncio.sleep(10)
...         return 78
...
>>> Test.meth1
<function __main__.Test.meth1()>

Since methods are bound on-demand, they don't usually exist in their bound form. As such, there is no memory cost to pay for just having methods and nothing for staticmethod to recoup. Since staticmethod is an actual layer during lookup¹ – even if it does nothing – there is no performance gain either way from (not) using staticmethod.

In [40]: class Test:
    ...:     @staticmethod
    ...:     def s_method():
    ...:         pass
    ...:     def i_method(self):
    ...:         pass
    ...: 

In [41]: %timeit Test.s_method
42.1 ns ± 0.576 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [42]: %timeit Test.i_method
40.9 ns ± 0.202 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

Note that these timings may vary slightly depending on the implementation and test setup. The takeaway is that both approaches are comparably fast and performance is not relevant to choose one over the other.


¹staticmethod works as a descriptor that runs every time the method is looked up.
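The descriptor machinery can be invoked by hand to see this (illustrative only):

```python
class Test:
    @staticmethod
    def s_method():
        pass

raw = Test.__dict__['s_method']   # the staticmethod wrapper itself
print(type(raw).__name__)         # staticmethod
# Its __get__ runs on every lookup and simply hands back the function:
print(raw.__get__(None, Test) is Test.s_method)  # True
print(raw.__func__ is Test.s_method)             # True
```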

MisterMiyagi
  • So there is a potential (if trivial) compute gain, but no ram gain? And thank you. – 2e0byo Oct 14 '21 at 18:29
  • thank you: that makes sense, and the profiling clinches it. I think I'm operating under *another* misapprehension, so I've edited the question to add one last question. Is there any chance you could add that to your answer as well? – 2e0byo Oct 14 '21 at 18:34
    @2e0byo Note that my timings shown here are for CPython. *In principle* an implementation could compile away the intermediate layer – I would bet that PyPy can do so and perhaps micropython has some facilities to that effect. It won't be significant for a large application by itself, though. – MisterMiyagi Oct 14 '21 at 18:38
    @2e0byo ``staticmethod`` is *never* bound. What happens is that its descriptor ``__get__`` is invoked – this happens for both regular methods and ``staticmethod``s on *every* access. – MisterMiyagi Oct 14 '21 at 18:42
    Note that in my env I consistently get inverse timeit results for both micropython and python, with staticmethods being slightly quicker, both when using `%timeit` in ipython and when running `timeit.timeit` directly. I'm going to take away that it doesn't matter in the end, and stop worrying about it – 2e0byo Oct 14 '21 at 19:05