
Currently we have 2 load balanced web servers. We are just starting to expose some functionality over NSB. If I create two "app" servers would I create a cluster between all 4 servers? Or should I create 2 clusters?

i.e.

Cluster1: Web Server A, App Server A

Cluster2: Web Server B, App Server B

If it is one cluster, how do I keep a published message from being handled more than once by the same logical subscriber when that subscriber is deployed to both app server A and app server B?

Is message durability the only reason I would put RabbitMQ on the web servers (assuming none of the app services run on the web servers as well)? In that case, my assumption is that cluster mirroring is what gets the message to the app server. Is this correct?

Ryan L
  • Ryan, I've added an answer but your question is not very clear. Is this enough information for you to resolve your question(s)? – Ramon Smits Jun 03 '15 at 09:40

1 Answer


Endpoints vs Servers

NServiceBus uses the concept of endpoints. An endpoint receives its messages from a queue. If an endpoint is scaled out, for either high availability or performance, you still have just one queue (with RabbitMQ). So if you have an instance running on server A and another on server B, both consume from the same queue, and each message is processed by only one of them.
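This competing-consumers behavior can be sketched in-process with plain Python queues. This is only an illustration of the delivery semantics, not actual RabbitMQ or NServiceBus code; the endpoint and instance names are made up:

```python
import queue
import threading

# One queue per logical endpoint; two scaled-out instances consume from it.
endpoint_queue = queue.Queue()  # hypothetical "Sales" endpoint queue

for i in range(10):
    endpoint_queue.put(f"message-{i}")

processed = {"server-A": [], "server-B": []}
lock = threading.Lock()

def instance(name):
    # Each instance pulls from the SAME queue until it is empty.
    while True:
        try:
            msg = endpoint_queue.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed[name].append(msg)

threads = [threading.Thread(target=instance, args=(n,)) for n in processed]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every message was handled exactly once, by one instance or the other.
all_handled = sorted(processed["server-A"] + processed["server-B"])
assert all_handled == [f"message-{i}" for i in range(10)]
```

The point is that scaling out does not duplicate work: the broker hands each message to exactly one instance of the endpoint.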

I wouldn't think in terms of app servers; think in terms of endpoints and their non-functional requirements regarding deployment, availability, and performance.

Availability vs Performance vs Deployment

It is not required to host all endpoints on both server A and server B. You can also run services X and Y on server A and services U and V on server B. You then scale out for performance but not for availability; however, availability is already less of an issue because of the async nature of messaging. This can also make deployment easier.

Pubsub vs Request Response

If the same logical endpoint has multiple instances deployed, then it should not matter which instance processes an event. If it does matter, then it probably isn't pub/sub but async request/response. NServiceBus handles that (with RabbitMQ) by creating a queue for each instance, on which the response can be received when that response requires affinity to the requesting instance.
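The distinction can be sketched with plain Python data structures. Again, this is a conceptual illustration under assumed names ("Billing", "Shipping", "Web-A"), not real broker or NServiceBus APIs:

```python
from collections import defaultdict

# Pub/sub: the broker places one copy of an event into the queue of each
# LOGICAL subscriber endpoint. Scaled-out instances of a subscriber then
# compete on that single queue (see the earlier sketch).
subscriber_queues = defaultdict(list)  # logical endpoint name -> its queue

def publish(event, subscribers):
    for endpoint in subscribers:
        subscriber_queues[endpoint].append(event)  # one copy per endpoint

publish("OrderPlaced", ["Billing", "Shipping"])

# Request/response with instance affinity: each INSTANCE gets its own
# callback queue, so the reply goes back to the exact instance that asked.
instance_reply_queues = {"Web-A": [], "Web-B": []}

def reply(response, requesting_instance):
    instance_reply_queues[requesting_instance].append(response)

reply("InvoiceCreated", "Web-A")

assert subscriber_queues["Billing"] == ["OrderPlaced"]
assert subscriber_queues["Shipping"] == ["OrderPlaced"]
assert instance_reply_queues["Web-A"] == ["InvoiceCreated"]
assert instance_reply_queues["Web-B"] == []
```

So each logical subscriber sees the event once, while a reply lands only in the requesting instance's own queue.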

Topology

You have:

  • Load balanced web farm cluster
  • Load balanced RabbitMQ cluster
  • NServiceBus Endpoints
    • Highly available: multiple instances on different machines
    • Spread out: endpoints on various machines (could even be one machine per endpoint)
    • A combination of both

Infrastructure

You could choose to run the RabbitMQ cluster on the same infrastructure as your web farm or on separate infrastructure. It depends on your requirements and available resources. If the web farm and RabbitMQ cluster are separate, you can scale each out independently more easily.

Ramon Smits