I use Python's lru_cache on a function which returns a mutable object, like so:
import functools

@functools.lru_cache()
def f():
    x = [0, 1, 2]  # Stand-in for some long computation
    return x
If I call this function, mutate the result and call it again, I do not obtain a "fresh", unmutated object:
a = f()
a.append(3)
b = f()
print(a) # [0, 1, 2, 3]
print(b) # [0, 1, 2, 3]
I get why this happens, but it's not what I want. A fix would be to leave the caller in charge of copying the result with list.copy:
a = f().copy()
a.append(3)
b = f().copy()
print(a) # [0, 1, 2, 3]
print(b) # [0, 1, 2]
However, I would like to fix this inside f. A pretty solution would be something like

@functools.lru_cache(copy=True)
def f():
    ...

though no copy argument is actually taken by functools.lru_cache.
Any suggestion as to how to best implement this behavior?
Edit
Based on the answer from holdenweb, this is my final implementation. It behaves exactly like the built-in functools.lru_cache by default, and extends it with the copying behavior when copy=True is supplied.
import functools
from copy import deepcopy
def lru_cache(maxsize=128, typed=False, copy=False):
    if not copy:
        # Without copying, fall back to the standard lru_cache
        return functools.lru_cache(maxsize, typed)
    def decorator(f):
        cached_func = functools.lru_cache(maxsize, typed)(f)
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Return a deep copy so callers can never mutate the cached object
            return deepcopy(cached_func(*args, **kwargs))
        return wrapper
    return decorator
# Tests below
@lru_cache()
def f():
    x = [0, 1, 2]  # Stand-in for some long computation
    return x
a = f()
a.append(3)
b = f()
print(a) # [0, 1, 2, 3]
print(b) # [0, 1, 2, 3]
@lru_cache(copy=True)
def f():
    x = [0, 1, 2]  # Stand-in for some long computation
    return x
a = f()
a.append(3)
b = f()
print(a) # [0, 1, 2, 3]
print(b) # [0, 1, 2]
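One caveat: deepcopy can be expensive for large or deeply nested return values, which may eat into the time saved by caching. If it is enough to protect the outer container, a shallow copy is cheaper. Below is a minimal sketch of that trade-off, adding a hypothetical deep flag on top of the implementation above (the flag and its name are my own invention, purely for illustration); with deep=False it uses copy.copy instead of deepcopy.

import functools
import copy as _copy

def lru_cache(maxsize=128, typed=False, copy=False, deep=True):
    # deep is a made-up extra flag: deep=True deep-copies the cached value,
    # deep=False only copies the outer container.
    if not copy:
        return functools.lru_cache(maxsize, typed)
    copier = _copy.deepcopy if deep else _copy.copy
    def decorator(f):
        cached_func = functools.lru_cache(maxsize, typed)(f)
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            return copier(cached_func(*args, **kwargs))
        return wrapper
    return decorator

# With a shallow copy the outer list is protected, but nested objects are shared:
@lru_cache(copy=True, deep=False)
def g():
    return [[0], 1, 2]

c = g()
c.append(3)     # does not leak into the cache
c[0].append(9)  # does leak, because the inner list is still the cached one
print(g())      # [[0, 9], 1, 2]

Whether the shallow variant is safe therefore depends on whether callers ever mutate nested objects inside the returned value.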