I have been trying to move some long-running processes from one application to another. The queues are implemented with Qpid Proton (AMQP 1.0), with the broker hosted in AWS. An application sends a single message whose payload contains a number of object ids to be processed (in this case, inserted into a Postgres database). An `unpacker` queue takes those ids and, in turn, sends one message per id to a `saver` queue, which listens for messages and stores the objects one by one. `unpacker` wraps those individual messages in a transaction, like so (lots of code removed for brevity):
def send(self, transaction):
    # Get payload
    payload = transaction.packed_payload
    # Unpack messages
    unpacked_messages = self.unpack_data(payload)
    for message in unpacked_messages:
        proton_message = Message(body=message)
        transaction.send(self.sender, proton_message)
    transaction.commit()
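For context, the transactional send()/commit() calls come from Proton's transaction API (the transaction object also carries the packed payload in the real code). A minimal sketch of how such a transaction is declared with Container.declare_transaction(); class name, endpoint and queue name are placeholders, not the actual application code:

from proton.handlers import MessagingHandler, TransactionHandler
from proton.reactor import Container

class UnpackerSketch(MessagingHandler, TransactionHandler):
    """Illustrative wiring only, not the real unpacker code."""

    def __init__(self, url, address):
        super(UnpackerSketch, self).__init__()
        self.url = url          # broker endpoint (placeholder)
        self.address = address  # saver queue name (placeholder)

    def on_start(self, event):
        self.container = event.container
        self.conn = self.container.connect(self.url)
        self.sender = self.container.create_sender(self.conn, self.address)
        # Ask the broker to open a local transaction; on_transaction_declared
        # fires once the transaction coordinator link is up.
        self.container.declare_transaction(self.conn, handler=self)

    def on_transaction_declared(self, event):
        # event.transaction exposes the send()/commit() calls used above
        self.transaction = event.transaction

    def on_transaction_committed(self, event):
        self.conn.close()

    def on_transaction_commit_failed(self, event):
        print("transaction commit failed")
        self.conn.close()

# Driven by the container event loop (placeholder endpoint and queue):
# Container(UnpackerSketch("amqps://broker.example:5671", "saver")).run()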
And `saver`, like this:
def on_message(self, event):
    message_body = json.loads(event.message.body)
    # data saving logic here
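The receiver side is wired up roughly along these lines (simplified sketch; endpoint and queue name are placeholders):

import json

from proton.handlers import MessagingHandler
from proton.reactor import Container

class SaverSketch(MessagingHandler):
    """Illustrative wiring only, not the real saver code."""

    def __init__(self, url, address):
        # MessagingHandler defaults apply here: prefetch=10, auto_accept=True
        super(SaverSketch, self).__init__()
        self.url = url          # broker endpoint (placeholder)
        self.address = address  # saver queue name (placeholder)

    def on_start(self, event):
        conn = event.container.connect(self.url)
        event.container.create_receiver(conn, self.address)

    def on_message(self, event):
        message_body = json.loads(event.message.body)
        # data saving logic here

# Container(SaverSketch("amqps://broker.example:5671", "saver")).run()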
This works really well when there are 1000 or fewer objects in the packed transaction. For higher volumes (which is definitely a production case), however, the commit appears to succeed, yet only 1000 messages are processed and the following error appears:
ERROR:proton:Could not process AMQP commands
I have tried bumping `max_prefetch` and `constantPendingMessageLimitStrategy`, to no avail: the connection is consistently shut down on the 1000th message, both on AWS and locally.
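For what it's worth, the closest client-side knob I can see in the Proton Python API is the prefetch argument on MessagingHandler, which controls how much link credit is kept open for the receiver (the default is 10). In a sketch like the one above, raising it would look like this (the value is just illustrative):

super(SaverSketch, self).__init__(prefetch=2000)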
Am I missing something regarding ActiveMQ and its queue configurations? What could explain this behavior?