
Here I am trying to reject requests at the Jetty server level, before they reach the servlet. As per the configuration below, the thread pool has 6 minimum and 10 maximum parallel execution threads and 10 requests can be queued in the thread pool, whereas 55 requests can be queued at the connector level, so the total is 85. That implies that if 200 requests are sent at a time, 115 requests should be rejected. But even if I send 1000 requests at a time with JMeter, the Jetty server caters to all of them. Below is the piece of code.

Server server = new Server();

// Bounded thread pool: 6 core threads, 10 maximum, 30 s keep-alive,
// and at most 15 queued jobs before further submissions are rejected.
LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(15);
ExecutorThreadPool pool = new ExecutorThreadPool(6, 10, 30000, TimeUnit.MILLISECONDS, queue);
server.setThreadPool(pool);
if (AssuranceConfiguration.annetAssurancesrvSslRequired)
{
    if (logger.isInfoEnabled())
    {
        logger.info("Going to use HTTPS, Initialising SSL context....");
    }
    SslContextFactory sslContextFactory = new SslContextFactory();
    sslContextFactory.setKeyStore(JettyServer.class.getResource(KEYSTOREFILE).getFile());
    sslContextFactory.setKeyStorePassword(AssuranceConfiguration.annetAssurancesrvKeystoreidentifier);
    sslContextFactory.setKeyManagerPassword(AssuranceConfiguration.annetAssurancesrvKeystoremanagerPassword);
    SslSelectChannelConnector sslConnector = new SslSelectChannelConnector(sslContextFactory);
    sslConnector.setPort(AssuranceConfiguration.annetAssurancesrvPort);
    server.setConnectors(new Connector[] { sslConnector });

}
else
{
    if (logger.isInfoEnabled())
    {
        logger.info("Going to use HTTP, Initialising simple context....");
    }
    SelectChannelConnector simpleConnector = new SelectChannelConnector();
    simpleConnector.setPort(AssuranceConfiguration.annetAssurancesrvPort);
    simpleConnector.setMaxIdleTime(30000);
    simpleConnector.setRequestHeaderSize(8192);
    // Accept queue: at most 55 connections pending acceptance at the connector.
    simpleConnector.setAcceptQueueSize(55);
    server.setConnectors(new Connector[] { simpleConnector });
}
  • The relationship between incoming connections and requests isn't that straightforward: your thread configuration only puts an artificial limit on the number of requests, but a connection can carry 0..n requests. It's easy, in your configuration, to consume all of the threads with only 1 connection. – Joakim Erdfelt Oct 19 '16 at 12:24
  • Thanks for the reply. But how would one connection serve multiple requests? A connection is exclusive between server and client; once a request is served, that particular connection is closed between server and client. – Rajan Kumar Singh Oct 20 '16 at 08:57
  • You are thinking of the older HTTP/0.9 spec, which behaves that way. HTTP/1.0 has keep-alive as optional, and HTTP/1.1 has persistent/pipelined connections as the default. An HTTP/1.1 connection, for example, can connect, have 0..n requests sent, and then wait for the responses (request handling/threads can be parallelized, but response writing has to be sequential, per spec). – Joakim Erdfelt Oct 20 '16 at 12:35
  • And there is no 1-thread-per-1-request handling rule with Servlet 3.0 and above. You can easily have 1 thread handling multiple requests on multiple connections if your Servlets use the Servlet 3.0+ features. – Joakim Erdfelt Oct 20 '16 at 12:36
  • Thanks again for the valuable info. But in order to meet my requirement, what would be the best approach according to you? :) – Rajan Kumar Singh Oct 21 '16 at 10:12
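
As an aside on the comment thread above: below is a minimal client sketch, not part of the question, illustrating the point that a single HTTP/1.1 connection can carry several requests (the host, port and class name are assumptions for illustration only). Because HTTP/1.1 connections are persistent by default, connector-level settings such as the accept queue bound the number of pending connections, not the number of requests.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch only: writes two HTTP/1.1 requests over one TCP connection.
public class TwoRequestsOneConnection
{
    public static void main(String[] args) throws Exception
    {
        try (Socket socket = new Socket("localhost", 8080)) // assumed host/port
        {
            OutputStream out = socket.getOutputStream();
            String request = "GET / HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "\r\n";
            // Two requests on the same connection; HTTP/1.1 keeps it open by default.
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Both responses arrive on the same connection, one after the other.
            // The loop ends once the server closes the idle connection
            // (e.g. after the 30000 ms maxIdleTime configured above).
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1)
            {
                System.out.write(buffer, 0, read);
            }
            System.out.flush();
        }
    }
}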

0 Answers
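
For reference on the Servlet 3.0 point raised in the comments: the sketch below is not part of the question's code (the servlet class, URL pattern, executor and delay are illustrative) and shows asynchronous request handling. The container thread returns to Jetty's pool as soon as startAsync() is called, and the response is completed later from an application-owned executor, which is why there is no fixed one-thread-per-request relationship.

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: the request is detached from the container thread and completed later.
@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class AsyncDemoServlet extends HttpServlet
{
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(2);

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
    {
        // The thread that called doGet() goes back to Jetty's pool right after this method returns.
        AsyncContext async = request.startAsync();
        async.setTimeout(10000);

        // Complete the response one second later from the application executor.
        executor.schedule(() ->
        {
            try
            {
                async.getResponse().getWriter().println("done");
            }
            catch (IOException e)
            {
                // Ignored in this sketch.
            }
            finally
            {
                async.complete();
            }
        }, 1, TimeUnit.SECONDS);
    }
}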