
My scenario is that I'm planning to create a Service Bus topic with an unknown number of subscribers. They can use topic filters, so each subscriber won't necessarily process every message from the topic.

I need, for a given message (Id), to wait until all handlers have done their job before continuing the workflow. Naturally, each handler will produce a message upon completion, and I can use, for example, a Durable Function to wait for a list of events.
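The fan-in step described above can be sketched independently of Durable Functions (where `WaitForExternalEvent` would play this role). This is a minimal, hypothetical tracker that assumes the list of expected handlers per message is already known, which is exactly the part the question is asking how to obtain:

```python
# Sketch of the fan-in step: continue the workflow only once every
# expected handler has reported completion for a given message id.
# Assumes the expected handler list is known up front.

class FanIn:
    def __init__(self):
        self.pending = {}  # message_id -> set of handler names still outstanding

    def expect(self, message_id, handlers):
        self.pending[message_id] = set(handlers)

    def complete(self, message_id, handler):
        """Mark one handler done; return True when all are done."""
        outstanding = self.pending[message_id]
        outstanding.discard(handler)
        return not outstanding

tracker = FanIn()
tracker.expect("msg-42", ["loyalty", "fraud-check"])
print(tracker.complete("msg-42", "loyalty"))      # → False
print(tracker.complete("msg-42", "fraud-check"))  # → True
```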

But the question is: how can I know the list of subscriptions a message has been (or will be) sent to?

With Microsoft.Azure.ServiceBus.Management.ManagementClient.GetSubscriptionsAsync() I can get the list of all subscriptions for my topic, but I cannot find a way to evaluate whether a given subscription will receive a particular message according to its filters.
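The management API can return each subscription's rules, but evaluating a rule against a message is left to the caller. A minimal sketch of what that evaluation would look like, assuming each rule is a simple property-equality (correlation-style) filter; real SQL filters would require an actual expression parser, and none of these helpers exist in the SDK:

```python
# Sketch: decide locally which subscriptions would receive a message,
# given each subscription's filters as dicts of required property values.
# An empty filter ({}) behaves like the TrueFilter and matches everything.

def matching_subscriptions(subscriptions, message_properties):
    """subscriptions: dict of name -> list of filters;
    a subscription matches if any one of its filters matches."""
    matched = []
    for name, filters in subscriptions.items():
        for f in filters:
            if all(message_properties.get(k) == v for k, v in f.items()):
                matched.append(name)
                break  # one matching rule is enough
    return matched

subs = {
    "all-purchases": [{}],
    "eu-only":       [{"region": "EU"}],
    "big-spenders":  [{"region": "EU", "tier": "gold"}, {"amount_band": "high"}],
}
msg = {"region": "EU", "tier": "silver"}
print(matching_subscriptions(subs, msg))  # → ['all-purchases', 'eu-only']
```

The fragile part is keeping this local evaluation in sync with what the broker actually does, which is presumably why nothing like it is exposed by the service.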

If that is not possible to achieve with Service Bus, are there any alternatives (besides reinventing the wheel with a custom Pub/Sub implementation) for this kind of scenario?

nobody.price
Sasha
  • Could you possibly break apart your subscriptions into multiple topics? Can you explain the broader picture of what you're trying to do? (E.g. please explain *why* you need to do it, not *what* you need to do.) – Slothario Dec 23 '19 at 16:02
  • @Slothario If we break topic into multiple streams: one for each subscriber, we'll break the whole idea that publisher is independent of subscribers. We won't be able to dynamically add new subscribers without modifying publisher's code... – Sasha Dec 23 '19 at 16:15
  • What are you trying to ultimately do? It sounds like you're trying to do something like sharding based on load, and that's an extremely common scenario. I don't doubt that Service Bus does something like that out of the box, and I know something like Kafka could probably handle it. However, it's hard to say unless I know what problem you're trying to solve. https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem – Slothario Dec 23 '19 at 18:53
  • @Slothario, yes, sorry, I didn't address your comment originally well. The idea is extensibility - to be able to add multiple 'plugins' to the core functionality without modifying the main logic. For example, when customer does a purchase, multiple loyalty benefits can be loaded for him based on various criteria. When all benefits are calculated, we need to trigger fraud detection (just in case someone is too lucky) and only then award those benefits to customer. The list of applicable benefits can be different based on payment amount, kind, location etc (filters apply for optimization). – Sasha Dec 24 '19 at 08:55
  • Thanks. Follow up questions: roughly how many calls per second will your service be getting? What is the main motivations for breaking your service out into multiple Azure functions? And if I understand it correctly, is the workflow like this: 1) User makes purchase, 2) purchase is processed by durable function, 3) 0-to-n consumers process it simultaneously and report back? – Slothario Dec 24 '19 at 14:27
  • The motivation for the break-down is that we could end up with dozens of complex handlers, each applying in its own scenario, and crucially each can be developed by a different team (even a different vendor) as a separate project. Right now we have everything in one place and that doesn't work well. How many calls - hard to say, but we should be ready for ~1000 calls per second at peak hours. Your understanding is close to reality, except that the purchase isn't processed by durable functions. I was planning to implement fan-out/fan-in with durables if I can't control the fan-out with Service Bus - that's the idea. – Sasha Dec 24 '19 at 17:06
  • One thing to consider: instead of breaking apart things into multiple microservices, create a very solid interface that will define what each service will take as inputs and outputs. Each team can be allowed to program their processing class to that interface however they want. Then, you can deploy the classes that implement that interface as a single microservice, and use the "strategy pattern" at runtime to determine what classes will process the transaction. Would that work? – Slothario Dec 25 '19 at 18:10
  • @Slothario, that could work, but I'm afraid that's reinventing the wheel. – Sasha Dec 29 '19 at 19:34
  • 1
    I get that. But what I've learned about microservices is that they're very easy to implement, but become very complicated fast because distributed computing is harder than it seems. Microservices are a strategy to reduce lines of communication within an organization, not to organize code. To organize code, use good design patterns. I strongly recommend doing some reading to convince yourself that's true. However, if you don't have the clout in your organization to make the change, I understand that as well. – Slothario Dec 30 '19 at 16:06
  • 1
    I would have one core service which listens for events and coordinate "plugins" calls via rest to process. Also part of that each of the plugins would need to "register"(You can easily do that as part of azure devops) in core to say I can process event of some type. In this case you will have flexibility of micro-services so you can deploy each chunk separately but still you will be able to control who and when process your event. Even more you can now order event processing, you can also break execution if needed so you dont call others processors, you can add rules and so on. – Vova Bilyachat Dec 30 '19 at 22:13

1 Answer


I would start by removing the ability to filter.

Create several topics (not a topic per subscriber) that are an approximation of the filters.

Every subscriber that subscribes to a topic must process all messages for that topic, even if that just means recording that it did nothing with the message.

Then you know who has subscribed to each topic and who has processed each message on each topic.
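The rule above, that a subscriber acknowledges every message even when it has nothing to do, can be sketched as a thin wrapper around each handler. The names here are hypothetical; the point is that "skipped" is an explicit result the coordinator can count, rather than a silently filtered message:

```python
# Sketch of the answer's rule: every subscriber on a topic handles
# every message, emitting an explicit "skipped" result instead of
# silently filtering, so a coordinator can still count completions.

def make_subscriber(name, is_relevant, handle):
    def on_message(message):
        if is_relevant(message):
            handle(message)
            return (name, "processed")
        return (name, "skipped")   # still counts as done
    return on_message

eu_handler = make_subscriber(
    "eu-loyalty",
    is_relevant=lambda m: m.get("region") == "EU",
    handle=lambda m: None,
)
print(eu_handler({"region": "EU"}))  # → ('eu-loyalty', 'processed')
print(eu_handler({"region": "US"}))  # → ('eu-loyalty', 'skipped')
```

The trade-off, as the question notes, is cost: every subscriber receives every message on its topic, which is exactly what the broker-side filters were meant to avoid.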

Shiraz Bhaiji
  • I thought the idea of Service Bus is that the subscribers decide which messages they want to listen to. And it's hard to make them not use filters when they can use the standard API for subscribing... Filters were also introduced to save costs: if some Azure Function needs to process 1% of a million requests per hour, a filter is a good cost-saver. But really, I came to the same conclusion: this doesn't look to be supported at the moment, and unfortunately every subscriber will need to respond to every message. – Sasha Feb 07 '20 at 16:17
  • The problem comes when you need to differentiate between messages that subscribers wanted but have not yet read, and messages that they were not interested in and that were filtered away. – Shiraz Bhaiji Feb 07 '20 at 16:41