
I wonder if I can do request-reply with this:

  • 1 Hazelcast instance/member (central point)
  • 1 application with hazelcast-client sending requests through a queue
  • 1 application with hazelcast-client waiting for requests on the queue

The first application also receives the response on another queue, posted by the second application.
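In code, the idea looks roughly like this. This is only a local sketch: plain `LinkedBlockingQueue`s stand in for the two distributed queues, which in the real setup would come from the Hazelcast client (e.g. `HazelcastClient.newHazelcastClient().getQueue("requests")`); the queue names and payloads are made up for illustration.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TwoQueueRequestReply {
    // Stand-ins for the two distributed queues; with a real cluster these
    // would be obtained from the Hazelcast client instead.
    static final BlockingQueue<String> requests = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> replies = new LinkedBlockingQueue<>();

    // Second application: blocks on the request queue, posts the answer.
    static Thread startResponder() {
        Thread t = new Thread(() -> {
            try {
                String req = requests.take();
                replies.offer("reply-to:" + req);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    // First application: sends a request and waits for the reply,
    // with a timeout so a dead responder does not block forever.
    static String call(String request) throws InterruptedException {
        startResponder();
        requests.offer(request);
        return replies.poll(5, TimeUnit.SECONDS);
    }
}
```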

Is it a good way to proceed? Or do you think of a better solution?

Thanks!

unludo

5 Answers


Over the last couple of days I have also been working on an "SOA-like" solution using Hazelcast queues to communicate between different processes on different machines.

My main goals were to have

  1. "one to one-of-many" communication with garanteed reply of one-of-the-many's

  2. "one to one" communication one way

  3. "one to one" communication with answering in a certain time

To make a long story short, I dropped this approach today for the following reasons:

  1. lots of complicated code with executor services, Callables, Runnables, InterruptedExceptions, shutdown handling, Hazelcast transactions, etc.

  2. dangling messages in the "one to one" communication when the receiver has a shorter lifetime than the sender

  3. losing messages if I kill certain cluster member(s) at the right time

  4. all cluster members must be able to deserialize the message, because it could be stored anywhere; the messages therefore can't be "specific" to certain clients and services

I switched over to a much simpler approach:

  1. all "services" register themselves in a MultiMap ("service registry") using the hazelcast cluster member UUID as key. Each entry contains some meta information like service identifier, load factor, starttime, host, pid, etc

  2. clients pick the UUID of one of the entries in that MultiMap and use a DistributedTask (distributed executor service) targeted at the chosen cluster member to invoke the service and optionally get a reply (in time)

  3. only the service client and the service itself must have the specific DistributedTask implementation on their classpath; all other cluster members are not bothered

  4. clients can easily detect dead entries in the service registry themselves: if they can't see a cluster member with the specific UUID (hazelcastInstance.getCluster().getMembers()), the service probably died unexpectedly. Clients can then pick "alive" entries, prefer entries with a lower load factor, do retries in the case of idempotent services, etc.

The programming gets very easy and powerful with the second approach (e.g. timeouts or cancellation of tasks), with much less code to maintain.
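The second approach can be sketched roughly as follows. This is a local simulation only: a `ConcurrentHashMap` stands in for the MultiMap registry and a plain `ExecutorService` per "member" stands in for submitting a task to a specific cluster member (with Hazelcast that submission would go through the distributed executor, e.g. DistributedTask in the 2.x era discussed here); all names and the toy service result are invented for illustration.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ServiceRegistrySketch {
    // Stand-in for the MultiMap "service registry": member UUID -> service id.
    static final Map<UUID, String> registry = new ConcurrentHashMap<>();
    // Stand-in for "run a task on that specific member": one executor each.
    static final Map<UUID, ExecutorService> members = new ConcurrentHashMap<>();

    // A service registers itself under its member UUID (step 1).
    static UUID registerService(String serviceId) {
        UUID memberId = UUID.randomUUID();
        registry.put(memberId, serviceId);
        members.put(memberId, Executors.newSingleThreadExecutor());
        return memberId;
    }

    // Client side (step 2): pick a member offering the service, submit a
    // task to it, and wait for the reply with a timeout.
    static String invoke(String serviceId, String arg, long timeoutMs) throws Exception {
        UUID target = registry.entrySet().stream()
                .filter(e -> e.getValue().equals(serviceId))
                .map(Map.Entry::getKey)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no provider for " + serviceId));
        // With Hazelcast this submit would target the chosen cluster member.
        Future<String> f = members.get(target).submit(() -> serviceId + "(" + arg + ")");
        return f.get(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

The timeout on `Future.get` is what gives the "reply in time" behaviour, and cancelling the returned future covers task cancellation.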

Hope this helps!

Peti
  • I think it's a good idea to make the bus manage the lifetime/TTL of a request. What I tried to avoid, though, is having too many Hazelcast nodes, as it slows overall communication because of the replication needed between the nodes. By the way, are your clients notified by the bus, or do you perform a kind of polling? – unludo Feb 15 '13 at 16:05
  • In my first approach - the queue based approach - the clients used IQueue#take on a special-only-for-this-single-reply-IQueue instance. – Peti Feb 16 '13 at 19:32

In the past we built an SOA system that uses Hazelcast queues as a bus. Here are some of the highlights.

a. Each service has an incoming queue; the service name is simply the name of the queue. You can have as many service providers as you wish, and you can scale up and down. All these service providers need to do is poll this queue and process the arriving requests.

b. Since the system is fully asynchronous, there is a call id on both the request and the response to correlate them.

c. Each client sends a request into the queue of the service that it wants to call. The request carries all the parameters for the service, the name of the queue to send the response to, and a call id. The queue name can simply be the address of the client; this way each client has its own unique queue.

d. Upon receiving a request, a service provider processes it and sends the response to the answer queue.

e. Each client also continuously polls its input queue to receive the answers to the requests that it sent.
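The steps a through e above can be sketched in a few dozen lines. Again this is a local simulation: named `LinkedBlockingQueue`s stand in for the named Hazelcast queues, and the service names, queue names, and the toy "uppercase" processing are invented for illustration.

```java
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CallIdBus {
    // A request carries its payload, a call id, and the name of the queue
    // to reply to (points b and c above).
    static final class Request {
        final String callId, replyQueue, payload;
        Request(String callId, String replyQueue, String payload) {
            this.callId = callId; this.replyQueue = replyQueue; this.payload = payload;
        }
    }
    static final class Response {
        final String callId, payload;
        Response(String callId, String payload) {
            this.callId = callId; this.payload = payload;
        }
    }

    // Named queues; with Hazelcast each name would map to a cluster queue.
    static final ConcurrentHashMap<String, BlockingQueue<Object>> queues = new ConcurrentHashMap<>();

    static BlockingQueue<Object> queue(String name) {
        return queues.computeIfAbsent(name, n -> new LinkedBlockingQueue<>());
    }

    // (a)+(d) service provider: polls its service queue and replies to the
    // queue named in each request, echoing the call id.
    static void startProvider(String serviceName) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    Request req = (Request) queue(serviceName).take();
                    queue(req.replyQueue).offer(
                            new Response(req.callId, req.payload.toUpperCase()));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // (c)+(e) client: sends the request, then polls its own queue and
    // matches the response by call id.
    static String call(String serviceName, String clientQueue, String payload)
            throws InterruptedException {
        String callId = UUID.randomUUID().toString();
        queue(serviceName).offer(new Request(callId, clientQueue, payload));
        while (true) {
            Response r = (Response) queue(clientQueue).poll(5, TimeUnit.SECONDS);
            if (r != null && r.callId.equals(callId)) return r.payload;
        }
    }
}
```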

The major drawback of this design is that queues are not as scalable as maps, so it is not very scalable overall. However, it can still process 5K requests per second.

Fuad Malikov
  • Thanks for your feedback. Regarding a., I also used a queue but, instead of a poll, did a take(), meaning it is blocking. Regarding c., you mean a client type will have its queue? If you really have one queue per client, then you create a direct connection and lose the 'module' idea, where you don't need to know the others to ask for services. – unludo Feb 11 '13 at 15:07

I ran a test myself and validated that it works well, with certain limitations.

The architecture is Producer -> Hazelcast node -> Consumer(s).

Using two Hazelcast queues, one for requests and one for responses, I could measure a round trip of under 1 ms.

Load balancing works fine if I put several consumers on the request queue.
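The load balancing falls out of queue semantics: each item is taken by exactly one of the consumers blocked on the same queue. A minimal local demonstration of that property, using a `LinkedBlockingQueue` in place of the Hazelcast request queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueLoadBalancing {
    // Several consumers block on the same queue; each offered item is
    // consumed exactly once, by whichever consumer takes it first.
    static int drain(int items, int consumers) throws InterruptedException {
        BlockingQueue<Integer> requests = new LinkedBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(items);
        for (int c = 0; c < consumers; c++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        requests.take();           // blocking, like IQueue#take
                        processed.incrementAndGet();
                        done.countDown();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.setDaemon(true);
            t.start();
        }
        for (int i = 0; i < items; i++) requests.offer(i);
        done.await(5, TimeUnit.SECONDS);
        return processed.get();           // items were shared, none duplicated
    }
}
```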

If I add another node and connect the clients to each node, the round trip goes above 15 ms. This is due to replication between the two Hazelcast nodes. If I kill a node, the clients continue to work, so failover works, at the cost of latency.

unludo

Can't you use a correlation id to perform request-reply on a single queue in Hazelcast? That is the id that should uniquely define a conversation between two providers/consumers of a queue.
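One way to sketch this: all replies travel on one shared queue, and a dispatcher hands each reply to the caller waiting on its correlation id. This is a hypothetical local sketch, with a `LinkedBlockingQueue` standing in for the single shared Hazelcast queue and replies modelled as `{correlationId, payload}` pairs:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class CorrelationIdReplies {
    // One shared reply queue for every conversation; with Hazelcast this
    // would be a single cluster queue shared by all clients.
    static final BlockingQueue<String[]> replies = new LinkedBlockingQueue<>();
    // Waiting callers, keyed by correlation id.
    static final ConcurrentHashMap<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Dispatcher: routes each reply to the caller holding its correlation id.
    static void startDispatcher() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String[] msg = replies.take();   // {correlationId, payload}
                    CompletableFuture<String> f = pending.remove(msg[0]);
                    if (f != null) f.complete(msg[1]);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // A caller registers interest in a correlation id before sending its
    // request, then waits on the returned future.
    static CompletableFuture<String> expectReply(String correlationId) {
        CompletableFuture<String> f = new CompletableFuture<>();
        pending.put(correlationId, f);
        return f;
    }
}
```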

Kurt Du Bois

What is the purpose of this setup, @unludo? I am just curious.

Hazel_arun
  • The idea is to have a fast, modular, asynchronous architecture. Modular = the modules don't need to know each other; each module works with the bus for request/reply and is thus independent. It could be done with ActiveMQ, but I think that is not fast enough. – unludo Feb 11 '13 at 15:13