
I am writing a messaging system using Netty. I cannot send a subsequent message before the first message has been sent successfully (and at times I must wait for an acknowledgement of the send from the peer endpoint). I have seen recommendations not to wait for a future to complete inside a ChannelHandlerAdapter, as doing so chews up CPU cycles in the EventLoop.

The question, then, is: how do I achieve this sequential logic without waiting for the first send to complete in the ChannelHandlerAdapter's EventLoop, and without a thread context switch between an application thread and the EventLoop thread?

a) If I wait for the ChannelFuture to complete in the ChannelHandlerAdapter, it chews up CPU cycles. If I instead send the subsequent messages from a listener registered with the ChannelFuture of the write, the application logic running in that listener will also consume CPU cycles in the EventLoop.

or,

b) If I write to the channel from an application thread, there is a thread context switch from the application thread to the channel's EventLoop thread. Is there a way to avoid this context switch in this use case?

Is there a better way?

Sunny

1 Answer


Ideally, you can do everything without leaving the I/O thread at all if your application is fully asynchronous.

This usually requires:

  • you are familiar with dealing with futures. An asynchronous operation usually returns a future or takes a callback as a parameter. You add a listener to the future, or supply a proper callback implementation, so that the action you want to perform runs when the requested asynchronous operation is finished. With proper coding, you should never need to call future.await() or similar blocking calls on the future.
  • the libraries you use are asynchronous.
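In Netty terms, this means adding a ChannelFutureListener to the future returned by a write and issuing the next write inside the listener, so the chain advances without any blocking call. The chaining idea can be sketched with the JDK's CompletableFuture; the names here (SequentialSend, send, the StringBuilder log standing in for the wire) are hypothetical stand-ins, not part of Netty's API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SequentialSend {
    // Stand-in for Channel.writeAndFlush(msg): returns a future that
    // completes when the message has been "written".
    static CompletableFuture<Void> send(String msg, StringBuilder log) {
        log.append(msg).append(';');
        return CompletableFuture.completedFuture(null);
    }

    // Chain the sends so each message is written only after the
    // previous future has completed -- no blocking calls anywhere.
    static CompletableFuture<Void> sendAll(List<String> msgs, StringBuilder log) {
        CompletableFuture<Void> chain = CompletableFuture.completedFuture(null);
        for (String m : msgs) {
            chain = chain.thenCompose(v -> send(m, log));
        }
        return chain;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        sendAll(List.of("a", "b", "c"), log).join();
        System.out.println(log); // a;b;c;
    }
}
```

With a real Channel, each step would call writeAndFlush and register the continuation as a ChannelFutureListener instead of thenCompose; either way, ordering is enforced by the callback chain rather than by blocking a thread.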
trustin
  • In a messaging system, the sender cannot send subsequent messages until he is assured that the message has landed on the remote messaging queue. If that is the case, then he can only send the subsequent message in the future callback, which in essence implies that I have to block until the first message has been sent successfully to the remote endpoint. This defeats the purpose of registering a listener: I need not register a future listener; I can just block until the future completes. – Sunny Apr 12 '15 at 06:19
  • And if the recommendation is not to block in the event loop, then this can be done in an application thread. But that would imply an extra context switch, which may not be acceptable to latency-sensitive applications. – Sunny Apr 12 '15 at 06:28
  • The above scenario is for clients. On the server side, we use future listeners for any blocking I/O, since the server is serving many clients and we do not want to hold a thread hostage (not even the non-event-loop threads). The clients remain blocked on an outstanding message send until the future listeners on the servers respond and unblock the clients. – Sunny Apr 12 '15 at 14:56
  • You can send the subsequent message in your future listener. – trustin Apr 14 '15 at 03:27
  • That is what I am doing currently. But given that I have to wait for the first message to be sent successfully on the client, why not just send it in the event loop itself on the client? This event loop serves only one producer/stream on the client, and the application cannot afford a context switch, because this system serves a stock-trading application that is very sensitive to latency. – Sunny Apr 16 '15 at 05:58