I am encountering some very strange behavior with the following sort of Celery workflow:
workflow = group(
    chain(task1.s(), task2.s()),
    chain(task3.s(), task4.s()),
)
This is in the context of Django.
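The tasks themselves are all plain single-argument tasks declared with shared_task; the bodies below are placeholders, but the signatures match the real ones:

from celery import shared_task

@shared_task
def task1(n):
    # placeholder body; the real task does actual work with n
    return n + 1

@shared_task
def task2(value):
    # receives task1's return value via the chain
    return value * 2

# task3 and task4 are declared the same way, each taking a single argument.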
When I call the workflow as follows:
workflow.apply_async((n,))
...for any integer value of n, the first task in each chain (task1 and task3) will fail with a TypeError like the following (taken from celery events):
args: [9, 8, 7, 5, 4, 3]
kwargs: {}
retries: 0
exception: TypeError('task1() takes exactly 1 argument (6 given)',)
state: FAILURE
The arguments after the first are always arguments that the workflow was previously called with. So, in this example, I have called workflow.apply_async((9,)) on this occasion, and the other numbers are values that were passed on previous occasions. On each occasion, the erroneous arguments passed to task1 and task3 are the same.
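To make that concrete, the sequence of calls that led to the failure above was something like this (reconstructed; the exact order of the earlier values may not be right):

workflow.apply_async((3,))
workflow.apply_async((4,))
workflow.apply_async((5,))
workflow.apply_async((7,))
workflow.apply_async((8,))
workflow.apply_async((9,))  # fails in task1/task3 with args [9, 8, 7, 5, 4, 3]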
I'm tempted to post this as a bug report to Celery, but I'm not yet certain the mistake isn't mine in some way.
Things I have ruled out:
- I'm definitely passing the arguments I think I am passing to workflow.apply_async. I have separately constructed and logged the tuple that I pass to make sure of this (a sketch of that call site follows this list).
- It isn't anything to do with passing a list (i.e. mutable) to apply_async rather than a tuple. I am definitely passing a tuple (i.e. immutable).
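The call site from the first point looks roughly like this (launch is just a stand-in name for the helper that kicks the workflow off; the logging is simplified):

import logging

logger = logging.getLogger(__name__)

def launch(n):
    args = (n,)  # built explicitly as a tuple, never a list
    logger.info("calling workflow with args=%r (%s)", args, type(args))
    return workflow.apply_async(args)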
The only moderately unusual thing about my setup, although I can't see how it's connected, is that task1 and task3 are configured with different queues.
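The routing lives in the Django settings, roughly along these lines (queue names and module paths are placeholders, and CELERY_ROUTES is the old-style setting name; newer Celery spells it task_routes):

# settings.py -- illustrative only; the real queue names and paths differ
CELERY_ROUTES = {
    'myapp.tasks.task1': {'queue': 'queue_one'},
    'myapp.tasks.task3': {'queue': 'queue_two'},
}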