
I want to make a system with several communicating processes.

The master process is going to raise events. There will be different events, each containing structured data. A couple of slave processes are going to subscribe to the events, receive the data and invoke the appropriate handlers. There are two considerations in my case.

  1. I am NOT concerned about security since there will be no 3rd party services.
  2. I AM concerned about performance.

People suggest using message queues like ZeroMQ in this situation. I'm a bit confused about the way it should be implemented. As far as I understood, ZeroMQ can only send/receive raw string data.

Should I pack my data into a string (e.g. JSON or XML) on the publisher side, unpack the data manually on the subscriber side and filter out only the necessary messages?
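Something along these lines is what I have in mind (a rough sketch with pyzmq; the endpoint, topic and field names are just placeholders I made up):

```python
import json
import zmq

ENDPOINT = "tcp://127.0.0.1:5556"   # hypothetical local endpoint
TOPIC = b"orders"                   # hypothetical event topic

def publisher():
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(ENDPOINT)
    payload = json.dumps({"event": "order_created", "id": 42}).encode("utf-8")
    # Multipart message: topic frame first so subscribers can filter on it.
    pub.send_multipart([TOPIC, payload])

def subscriber():
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(ENDPOINT)
    sub.setsockopt(zmq.SUBSCRIBE, TOPIC)   # only receive messages whose first frame starts with b"orders"
    topic, payload = sub.recv_multipart()
    data = json.loads(payload.decode("utf-8"))
    # ... dispatch to the handler registered for data["event"]
```

As far as I understand, the SUBSCRIBE option does prefix matching on the first frame, so putting the topic in its own frame keeps the JSON payload out of the filter.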

If there is a better way to approach my problem, I would be very glad to hear it.

Kara
Pavel Murygin
  • I don't have much personal experience in this area, but I have heard good things about Google's Protocol Buffers http://code.google.com/p/protobuf/ , particularly in situations where the various participating processes may be written in different languages – Ken Keenan Mar 06 '13 at 16:57
  • See also this question http://stackoverflow.com/questions/475794/how-fast-or-lightweight-is-protocol-buffer which includes benchmarking info about the .NET implementation of Protocol Buffers – Ken Keenan Mar 06 '13 at 17:05
  • Thanks, @KenKeenan. If I understand it right, protobufs are just a common way of serializing data across different languages. In other words, I will still have to use ZeroMQ and manually filter the needed messages. I wonder if that's the right way to do the job – Pavel Murygin Mar 06 '13 at 17:48
  • I believe you're correct: you still need some way of passing the serialised messages back and forth. Again, I haven't any experience of ZeroMQ itself, but I have used proprietary message queue products in Java. The big advantage of message queues is that they are queued: they will store messages if a client is not permanently connected. If you did not need this functionality, you could use raw TCP sockets. – Ken Keenan Mar 06 '13 at 19:01

2 Answers


I would use a messaging provider for the scenario you described. The advantages I see are:

1) I don't have to write code to deliver messages to subscribers. That lets me concentrate on my business logic and data format. I can decide on a data format (XML, JSON, or any format that suits my requirements and is understood by the subscribers) and publish messages.

2) If the requirement arises, the number of subscribers can be increased without adding or modifying any code.

3) Subscribers can be moved to different machines without affecting the solution.

4) Subscribers can also be offline while the master/publisher continues to publish messages; subscribers can come online later and invoke the required handlers.

Shashi

These are the options I see:

  1. If the concern is only about performance, while durability and persistence of the events are not part of the requirements, then pack the data into a custom-defined protocol structure in a byte array and use sockets for inter-process communication (see the sketch after this list). But note: custom byte array structures are expensive in terms of extensibility and maintainability. Another option is to choose JSON as the data interchange format, which is comparatively more readable.
  2. If durability and persistence of the event data are part of the requirements, then avail yourself of the benefits of message-oriented middleware (MOM). Features such as declarative synchronization are some of the other benefits MOM can bring.
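As a rough sketch of option 1 (Python, purely for illustration; the frame layout is made up, not a recommendation):

```python
import struct

# Hand-rolled binary frame: event id (uint16) + payload length (uint32), network byte order.
HEADER = struct.Struct("!HI")

def pack_event(event_id: int, payload: bytes) -> bytes:
    return HEADER.pack(event_id, len(payload)) + payload

def unpack_event(frame: bytes) -> tuple[int, bytes]:
    event_id, length = HEADER.unpack_from(frame)
    return event_id, frame[HEADER.size:HEADER.size + length]

frame = pack_event(7, b'{"id": 42}')
print(unpack_event(frame))   # (7, b'{"id": 42}')
```

The bytes returned by pack_event would then be written to whatever socket you use for inter-process communication; every change to the layout has to be mirrored on both sides, which is where the maintainability cost comes from.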
Shree