
I'm trying to wrap my head around building an asynchronous (non-blocking) HTTP server using Java NIO. I currently have a thread-pool implementation and would like to turn it into an event-driven server with a single thread.

How exactly does an event-driven server work? Do we still need threads?

I've been reading about Java channels, buffers and selectors. After I create a ServerSocketChannel and the selector and listen for requests, do I need to hand each request over to other threads so they can process and serve it? If so, how is that any different from a thread-pool implementation?

And if I don't create more threads to process requests, how can the same thread keep listening for requests and process them at the same time? I'm talking SCALABLE, say 1 million requests in total and 1000 arriving concurrently.

  • Have you seen [Netty](http://netty.io/)? If not, I can't recommend the library highly enough for async networking. You'll still have to wrap your head around async I/O as a concept, but Netty helps you reason about your design more clearly, in my opinion. – Dev Feb 11 '15 at 14:50
  • Thanks, I'll try that. But right now, I have to do it without using any library. So I was just looking for the concept to get started. – Aayush Gupta Feb 11 '15 at 14:53

1 Answer


I've been reading about Java channels, buffers and selectors. After I create a ServerSocketChannel and the selector and listen for requests, do I need to hand each request over to other threads so they can process and serve it?

No, the idea is that you process data as it is available, not necessarily using threads.

The complication comes from needing to handle data as it arrives. For instance, you might not get a full request at once; in that case you need to buffer it somewhere until you have the full request, or process it piecemeal.

Once you have the full request, you need to send the response. Again, the whole response cannot normally be sent at once. You send as much as you can without blocking, then use the selector to wait until you can send more (or until another event happens, such as another request coming in).
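For concreteness, here is a minimal single-threaded selector-loop sketch along those lines. It is not a complete HTTP server: the port, buffer size, request-completeness check and canned response are all assumptions made just to keep the example self-contained.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EventLoopServer {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));   // assumed port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                      // block until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        // per-connection buffer for accumulating a partial request
                        client.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(8192));
                    }
                } else if (key.isReadable()) {
                    handleRead(key);
                } else if (key.isWritable()) {
                    handleWrite(key);
                }
            }
        }
    }

    private static void handleRead(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteBuffer request = (ByteBuffer) key.attachment();
        int n = channel.read(request);              // may deliver only part of the request
        if (n == -1) {
            channel.close();                        // peer closed; the key is cancelled automatically
            return;
        }
        // crude completeness check (assumption): a blank line ends a simple GET request
        String soFar = new String(request.array(), 0, request.position());
        if (soFar.contains("\r\n\r\n")) {
            String body = "hello";
            String response = "HTTP/1.1 200 OK\r\nContent-Length: " + body.length() + "\r\n\r\n" + body;
            key.attach(ByteBuffer.wrap(response.getBytes()));
            key.interestOps(SelectionKey.OP_WRITE); // now wait until the socket is writable
        }
        // otherwise stay interested in OP_READ and wait for more bytes
    }

    private static void handleWrite(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteBuffer response = (ByteBuffer) key.attachment();
        channel.write(response);                    // may write only part of the response
        if (!response.hasRemaining()) {
            channel.close();                        // done with this connection (no keep-alive)
        }
    }
}
```

The point is that one thread services every connection: each call into handleRead or handleWrite does only the work that can be done without blocking, and the selector tells it which channels are ready next.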

davmac
  • does that mean if the server is bombarded with requests, like 1000 concurrent requests and then more until a million, it won't be able to write to any? Does that also mean that the buffer size should be big enough for 1000 requests? – Aayush Gupta Feb 11 '15 at 15:29
  • Usually requests will buffer at the OS level. I'm not sure what you mean by buffering 1000 requests; you would generally associate a separate buffer with each request (unless you can process the request piecemeal). You can choose which selected channels you wish to process, so there's no reason why you'd not be able to write to channels expecting a response even if you had a huge number of incoming requests. – davmac Feb 11 '15 at 15:50
  • Oh, that makes sense. So, I can make a new channel-buffer pair for every request that I get then? – Aayush Gupta Feb 11 '15 at 15:54
  • @AayushGupta Better to associate a pair of buffers with the connection. You can define yourself a context class that contains them both and that is used as the selection-key attachment so it sticks to the channel. – user207421 Feb 11 '15 at 16:29
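A rough sketch of that attachment idea, assuming a hypothetical ConnectionContext class (the names and buffer size are made up):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Hypothetical per-connection state, attached to the SelectionKey so it
// travels with the channel between select() calls.
class ConnectionContext {
    final ByteBuffer readBuffer = ByteBuffer.allocate(8192); // partial request bytes
    ByteBuffer writeBuffer;                                   // response being written, if any
}

class AttachmentExample {
    // when accepting a new connection:
    static void onAccept(Selector selector, SocketChannel client) throws IOException {
        client.configureBlocking(false);
        client.register(selector, SelectionKey.OP_READ, new ConnectionContext());
    }

    // later, in the read/write handlers:
    static void onReady(SelectionKey key) {
        ConnectionContext ctx = (ConnectionContext) key.attachment();
        // read into ctx.readBuffer, or drain ctx.writeBuffer, as appropriate
    }
}
```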