
When building webhooks, it has long been considered a best practice for the consumer of the webhook (i.e. the receiver) to immediately drop any message received into a queue, so that slow processing can't back up the delivery of subsequent messages. Nowadays, with the advent of internet-accessible queues (e.g. Amazon SQS), why are we not flipping the script on webhook architecture, making the consumer responsible for pulling messages off a queue rather than receiving them via an HTTP POST?
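To make the established pattern concrete, here is a minimal sketch of the "enqueue immediately" webhook receiver described above. The handler name, payload shape, and the use of an in-memory `queue.Queue` are my own illustrative assumptions; in practice the queue would be something durable like SQS or RabbitMQ.

```python
# Sketch: a webhook endpoint that does no real work, only enqueues.
# The function names and in-memory queue are assumptions for illustration.
import json
import queue

inbound = queue.Queue()  # stand-in for a durable message queue


def handle_webhook(raw_body: bytes) -> int:
    """HTTP handler: parse minimally, enqueue, return 200 right away.

    Because no processing happens here, one slow event cannot back up
    the delivery of subsequent webhook POSTs.
    """
    event = json.loads(raw_body)  # reject malformed payloads early
    inbound.put(event)            # defer all real work to a worker
    return 200                    # acknowledge receipt immediately


def worker_step() -> dict:
    """A separate worker drains the queue at its own pace."""
    return inbound.get(timeout=1)
```

The point of the split is that the HTTP response time is decoupled from processing time: the endpoint stays fast no matter how slow the downstream work is.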

Yes, this essentially is no longer a "webhook", but the concept is the same. The problem being solved is that a consumer wants to be made aware of events happening in another system. Why not have the publisher of these events store all relevant events in a queue dedicated to a single consumer, so that the consumer can pull those messages off the queue at its own pace? I see many benefits to this, mainly the transfer of responsibility to the consumer to dequeue messages according to its own abilities. The publisher drops messages into the queue as quickly as it can, and the consumer pulls them off as quickly as it can. If the consumer goes down for any reason, the messages remain in the queue for as long as needed. Once the consumer is back up, it can continue pulling messages off. No messages are ever lost in this scenario. Right?
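The "no messages ever lost" claim rests on at-least-once semantics: a pulled message is not removed until the consumer explicitly deletes it after successful processing. The sketch below simulates that behavior with an in-memory class of my own invention (it is not the real boto3/SQS API) just to show the mechanics.

```python
# Sketch: SQS-style pull semantics simulated in memory (hypothetical
# class, NOT the real SQS API). A message survives until it is
# explicitly deleted, so a consumer crash before deletion loses nothing.
import itertools


class PullQueue:
    def __init__(self):
        self._ids = itertools.count()
        self._messages = {}  # id -> body; survives failed consumers

    def publish(self, body):
        """Producer drops messages as quickly as it likes."""
        self._messages[next(self._ids)] = body

    def receive(self):
        """Consumer pulls at its own pace; the message is NOT removed yet."""
        for mid, body in self._messages.items():
            return mid, body
        return None

    def delete(self, mid):
        """Only after successful processing is the message removed."""
        del self._messages[mid]
```

If the consumer dies between `receive` and `delete`, the message is simply received again later. A real SQS queue adds one refinement this sketch omits: an in-flight message is hidden for a visibility timeout before being redelivered, so two consumers don't process it concurrently.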

user1431072

1 Answer


The way I see it, this is mostly a matter of opinion, so what follows isn't necessarily the ultimate answer.

While in theory there's a good point in advocating for producers pushing messages straight into the consumer's queue, a real-world constraint is imposed on those producers: every messaging system has its own nuances, and a producer has to be aware of each of them in order to publish to the various messaging services its consumers use. Authentication is another such nuance. All of this turns into a nightmare for any producer that issues notifications to many different consumers. This is exactly what webhooks have solved: a ubiquitous, established protocol, standard authentication, etc.

Sean Feldman
  • When you say "pushing messages straight to the queue by the producers", what I'm imagining is that the queue is owned by the producer. That way, the consumers would have to register/authenticate with the producer to use that specific queue. This would avoid leaving the producer to cross internet boundaries to drop messages on someone else's queue. Would that perspective alter the complexity, in your mind? – user1431072 Sep 07 '22 at 15:46
  • I'm not sure it would. Take Azure Service Bus, SQS, or a hosted RMQ; you would be crossing boundaries. I'm also not convinced we can state a queue is owned by the producer. A queue is a shared piece of infrastructure. And the beauty of webhooks is that you don't have shared resources. You invoke an HTTP call. So that's the perspective I'm looking at it from. – Sean Feldman Sep 07 '22 at 18:34