I'm using the Jetty client to make HTTP/2 connections to servers that are HTTP/2 enabled. I can see Jetty opening connections as needed and using them to exchange data between the endpoints. My snippet for creating the HTTP/2 client is as follows:

import java.security.Security;

import org.conscrypt.OpenSSLProvider;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.ssl.SslContextFactory;

// Register Conscrypt so TLSv1.3 and ALPN are available.
Security.addProvider(new OpenSSLProvider());

SslContextFactory sslContextFactory = new SslContextFactory(true); // trust-all, for testing only
sslContextFactory.setProvider("Conscrypt");
sslContextFactory.setProtocol("TLSv1.3");

HTTP2Client http2Client = new HTTP2Client();
http2Client.setMaxConcurrentPushedStreams(1000);
http2Client.setConnectTimeout(30); // note: both timeouts are in milliseconds
http2Client.setIdleTimeout(5);

// The SslContextFactory passed to the constructor is managed by HttpClient,
// so a separate addBean() call and a second start() are not needed.
HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client), sslContextFactory);
httpClient.setMaxConnectionsPerDestination(20);
httpClient.setMaxRequestsQueuedPerDestination(100);
httpClient.start();

Later on I use the client created above to exchange HTTP/2 requests like this:

Request request = httpClient.POST("my url goes here");
request.header(HttpHeader.CONTENT_TYPE, "application/json");
request.content(new StringContentProvider("request payload goes here", "utf-8"));
ContentResponse response = request.send();
String res = response.getContentAsString(); // uses the response's charset
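
For completeness, the same exchange can be done without blocking a thread per request, using Jetty's asynchronous listener API. A minimal sketch, where payloads is a hypothetical stand-in for the real request bodies:

// BufferingResponseListener is org.eclipse.jetty.client.util.BufferingResponseListener;
// Result is org.eclipse.jetty.client.api.Result.
for (String payload : payloads) {
    httpClient.POST("my url goes here")
            .header(HttpHeader.CONTENT_TYPE, "application/json")
            .content(new StringContentProvider(payload, "utf-8"))
            .send(new BufferingResponseListener() {
                @Override
                public void onComplete(Result result) {
                    if (result.isSucceeded())
                        System.out.println(getContentAsString()); // process the response here
                    else
                        result.getFailure().printStackTrace();
                }
            });
}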

My requirement is that there will be hundreds of requests multiplexed simultaneously to the same destination. It is fast while the load is low, but as the load increases, the time taken to process the requests also increases (sometimes 10x).

In such a case I would like to force the Jetty client to open multiple TCP connections and distribute the load across different sockets, instead of squeezing everything onto the same open socket. I have already tried the settings below with different values,

httpClient.setMaxConnectionsPerDestination(20);
httpClient.setMaxRequestsQueuedPerDestination(100);

with no luck.

Does Jetty do connection coalescing when more than one connection is opened? Is there a way to open multiple TCP connections and distribute the load so that the processing time is unaffected at peak load?

Jetty - 9.4.15, Provider - Conscrypt, JDK - 1.8, OS - Ubuntu/CentOS

Thanks in advance

Zyber

1 Answer


Jetty's HttpClient configured with the HTTP/2 transport opens new connections as necessary when the number of concurrent streams per connection is exceeded; the maximum number of concurrent streams is a server-side configuration parameter.

Say, for example, that the server is configured with max_concurrent_streams=1024: you have to push your client beyond 1024 concurrent requests before HttpClient opens a new connection.

Sometimes a large max_concurrent_streams can never be reached by a client (e.g. the client is slower than the server), so additional connections are never opened.

If you want to force the client to open multiple connections, you have to reduce max_concurrent_streams, which is a server-side configuration. After that, you can play with the client-side configuration, limiting the client via maxConnectionsPerDestination and maxRequestsQueuedPerDestination.
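
For illustration, a minimal sketch of both sides, assuming the server is Jetty (the values are arbitrary examples; Tomcat exposes an equivalent maxConcurrentStreams attribute on its HTTP/2 UpgradeProtocol):

// Server side (Jetty sketch): advertise a lower max_concurrent_streams in the
// SETTINGS frame, so clients hit the limit sooner and open more connections.
// The factory is then installed on a ServerConnector alongside ALPN/SSL.
HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(new HttpConfiguration());
h2.setMaxConcurrentStreams(128);

// Client side: allow enough connections per destination to absorb the load
// once the per-connection stream limit is reached.
httpClient.setMaxConnectionsPerDestination(20);
httpClient.setMaxRequestsQueuedPerDestination(100);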

sbordet
  • But why does the processing time of all parallel requests increase when the number of concurrent requests to the same host increases? Say, 30 or 40 to the same destination. – Zyber Mar 05 '19 at 03:40
  • Hard to say without evidence. We have people managing thousands of requests/s with HTTP/2, so your 30/40 is really nothing for Jetty. It could be due to a bad thread pool configuration, a bad flow control configuration, the client not handling the load, or a hundred other reasons. You are welcome to open an [issue to the Jetty project](https://github.com/eclipse/jetty.project/issues), attach your code and debug logs, and we can discuss what happens in detail. – sbordet Mar 05 '19 at 20:46
  • Thanks sbordet, but tens of thousands of GET requests work without a problem, whereas POST requests with a payload of around 1000B each, multiplexed heavily, slow down the processing time of each request. I also suspect that Tomcat (server side) is limiting multiplexed POST requests whose payload is > 1000B for some reason. Let me analyse more before opening an unnecessary issue. – Zyber Mar 07 '19 at 04:36