
I have a requirement to process up to 5000 requests in 10 seconds. The problem is that there seems to be a bottleneck somewhere in Kestrel: the more simultaneous client requests I have, the longer the delay before the body of the first request is read. A minimal test console app is below:

using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.VisualStudio.TestTools.UnitTesting; // for Assert (MSTest)

public interface IMyHandler
{
    Func<HttpContext, string> Handler { get; set; }
}

public class MyHandler : IMyHandler
{
    public Func<HttpContext, string> Handler { get; set; }
}

public class Startup : IStartup
{
    IMyHandler MyHandler;
    public Startup(IMyHandler h)
    {
        MyHandler = h;
    }

    public void Configure(IApplicationBuilder app)
    {
        app.Run(context =>
        {
            MyHandler.Handler(context);
            var response = String.Format("Hello, Universe! It is {0}", DateTime.Now);
            return context.Response.WriteAsync(response);
        });
    }

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // Add framework services
        return services.BuildServiceProvider();
    }
}

class Program
{
    static void Main(string[] args)
    {
        EventWaitHandle success = new AutoResetEvent(false);
        var sentRequests = 0;
        var serverTask = Task.Run(() =>
        {

            MyHandler h1 = new MyHandler();
            h1.Handler = (context) =>
            {
                using (BufferedStream buffer = new BufferedStream(context.Request.Body))
                {
                    using (var reader = new StreamReader(buffer))
                    {
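                        // Read the whole request body synchronously; this call blocks until the client has finished sending it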
                        var body = reader.ReadToEnd();
                        var _sent = sentRequests;
                        //success.Set();
                    }
                }
                return "OK";
            };

            var host = new WebHostBuilder()
                .UseKestrel((context, options) =>
                {
                    //options.ApplicationSchedulingMode = SchedulingMode.Inline;
                    options.ListenAnyIP(21122, listenOptions =>
                    {
                    });
                })
                .UseContentRoot(Directory.GetCurrentDirectory())

                 .ConfigureServices(services =>
                 {
                     services.AddSingleton<IMyHandler>(h1);
                 })
                //.UseIISIntegration()
                .UseStartup<Startup>()
                //.UseApplicationInsights()
                .Build();

            host.Run();
        });
        Enumerable.Range(0, 100).ToList().ForEach((n) => {
            var clientTask = Task.Run(async () =>
            {
                var handler = new HttpClientHandler();
                HttpClient client = new HttpClient(handler);
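                // Note: this increment is not atomic across tasks; it is only a rough indicator for the test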
                sentRequests++;
                var p1 = await client.PostAsync("http://localhost:21122", new StringContent("{test:1}", Encoding.UTF8, "application/json"));
            });
        });

        Assert.IsTrue(success.WaitOne(new TimeSpan(0, 0, 50)));
    }
}

For the given 100 requests I already see about 12 seconds of delay before the first body read. To see this, just set a breakpoint on the line

var _sent = sentRequests;

Also, sentRequests is equal to 100 at that point, which means (this is a bit of speculation, but it seems plausible) that all the requests have already been sent by then. It looks to me as if both things were done by a single thread: while it accepts requests, it doesn't start reading them.
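
To quantify the delay without the debugger, a variation of the handler above could log elapsed time instead (a sketch only; the Stopwatch is not part of the original repro, and it assumes the same h1 and sentRequests variables as in Main):

// Sketch: start a timer roughly when the server comes up, then log how late each body read happens.
var watch = System.Diagnostics.Stopwatch.StartNew();
h1.Handler = (context) =>
{
    using (var reader = new StreamReader(context.Request.Body))
    {
        var body = reader.ReadToEnd();
        Console.WriteLine("Body read after {0} ms, sentRequests = {1}",
            watch.ElapsedMilliseconds, sentRequests);
    }
    return "OK";
};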

Any ideas how to overcome this?

alehro
  • Well, one issue is you're likely exhausting the connection pool. `HttpClient` should be treated mostly as a singleton. The connection it creates is left open, so when you all of a sudden new up 100 `HttpClient` instances, that's 100 different connections now open. It's likely having to wait for a connection to time out before it can process the next. Try moving your `HttpClient` instantiation out of the loop and use the same client for each request instead. – Chris Pratt Sep 18 '18 at 13:01
  • @ChrisPratt - but if they move to a single `HttpClient` they're liable to hit the host connection "limit" instead. – Damien_The_Unbeliever Sep 18 '18 at 13:48
  • No? Honestly, not sure what you mean, but the right way is a single client. That is one major bottleneck you're actually creating in calling code, not the API. – Chris Pratt Sep 18 '18 at 13:52
  • @ChrisPratt, using a single HttpClient indeed fixed the problem, though I don't think the connection pool was exhausted: Google gives various figures, but even for client Windows it should be more than 1000. Also, a (smaller) delay can be observed with just 30 clients. So I conclude there is some slow part in the system that creates the delay. But if I use the same HttpClient, the socket gets reused and my test of many simultaneous connections is no longer valid. That brings me back to the original problem of how to test the 5000-client-requests-per-10-seconds scenario. – alehro Sep 18 '18 at 14:20
  • @ChrisPratt, please add your comment as an answer if you like – alehro Sep 18 '18 at 14:21
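
A rough sketch of the single-client approach discussed in the comments above (this is not code from the question; the MaxConnectionsPerServer value is an assumption, raised only so the test can still open several concurrent connections):

// Client side only; assumes the same server and the same sentRequests variable as in the question.
// One HttpClient is shared by all requests instead of one per request.
var handler = new HttpClientHandler { MaxConnectionsPerServer = 100 };
var client = new HttpClient(handler);

var clientTasks = Enumerable.Range(0, 100).Select(n => Task.Run(async () =>
{
    Interlocked.Increment(ref sentRequests); // atomic, since many tasks touch the counter
    var response = await client.PostAsync(
        "http://localhost:21122",
        new StringContent("{test:1}", Encoding.UTF8, "application/json"));
})).ToArray();

Task.WaitAll(clientTasks);

With MaxConnectionsPerServer raised, the shared HttpClient can still open multiple sockets to the server, which may address the concern above that reusing a single client makes the many-connections test invalid.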

0 Answers