When building webhooks, it's a long-standing best practice for the consumer of the webhook (i.e. the receiver) to immediately drop any received message into a queue, so that slow processing doesn't "back up" the delivery of subsequent messages. This has been the accepted wisdom around webhook architectures for years. Nowadays, with the advent of internet-accessible queues (e.g. Amazon SQS), why are we not flipping the script on webhook architecture so that the consumer becomes responsible for "pulling" messages off a queue, rather than receiving them via an HTTP POST?
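To make the contrast concrete, here is a minimal sketch of that push-then-enqueue pattern, assuming Flask and boto3; the endpoint path and queue URL are hypothetical:

```python
# Conventional push model: accept the HTTP POST, enqueue immediately,
# and acknowledge before doing any real work, so the publisher's
# delivery pipeline is never blocked by our processing.
import boto3
from flask import Flask, request

app = Flask(__name__)
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/webhook-events"  # hypothetical

@app.route("/webhook", methods=["POST"])
def receive_webhook():
    # Drop the raw payload onto an internal queue as fast as possible.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=request.get_data(as_text=True),
    )
    return "", 202  # acknowledge immediately; a worker processes later
```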
Yes, at that point this arguably is no longer a "webhook", but the concept is the same. The problem being solved is that a consumer wants to be made aware of events happening in another system. Why not have the publisher of those events write every relevant event into a queue dedicated to a single consumer, so that the consumer can pull messages off the queue at its own pace? I see many benefits to this, chief among them the transfer of responsibility to the consumer to dequeue messages according to its own capacity. The publisher drops messages into the queue as quickly as it can, and the consumer pulls them off as quickly as it can. If the consumer goes down for any reason, the messages simply remain in the queue for as long as needed; once the consumer is back up, it can resume pulling messages off. No messages are ever lost in this scenario. Right?
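And a minimal sketch of the pull model I'm describing, again using boto3 against a hypothetical per-consumer SQS queue (the queue URL and the `process` function are made up for illustration):

```python
# Pull model: the publisher writes events into a queue dedicated to this
# consumer, and the consumer long-polls at whatever pace it can sustain.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/consumer-a-events"  # hypothetical

def process(body: str) -> None:
    # Stand-in for the consumer's actual business logic.
    print("handling event:", body)

def poll_forever() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling: cheap when the queue is idle
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])
            # Delete only after successful processing; if the consumer
            # crashes first, the message becomes visible again after the
            # visibility timeout, so nothing is lost.
            sqs.delete_message(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=msg["ReceiptHandle"],
            )

if __name__ == "__main__":
    poll_forever()
```

SQS's visibility timeout is what gives the "no messages lost" claim a concrete mechanism: a message that is received but never deleted reappears on the queue automatically, so a crashed consumer just picks it up again on the next poll.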