The reason many connections are open with a `Sleep` status is that, by default, MySqlConnector (which is used by Pomelo) has connection pooling enabled (`Pooling=true`), with a maximum of 100 connections per connection string (`MaxPoolSize=100`). See MySQL .NET Connection String Options for all default settings.
So having around 60 open connections can easily happen when either 60 people use the app's API in parallel, or when, for example, 3 different connection strings are used, each with 20 users in parallel.
Once those connections have been opened, they will effectively stay open for a long time by default.
With `ConnectionLifeTime=0` as the default setting, they will never be explicitly closed by MySqlConnector's connection pool management. However, they will be closed by MySQL itself after having been idle for the number of seconds specified by the MySQL system variable `wait_timeout`. But since this variable is set to `28800` by default, it will take 8 hours of inactivity before once-created pooled connections are closed by MySQL.
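You can check the effective value on your server with a quick query; here is a minimal sketch, assuming placeholder host and credentials:

```csharp
using System;
using MySqlConnector;

// Placeholder connection string; replace host and credentials.
var connectionString =
    "Server=db.example.com;Database=mydb;User=myuser;Password=mypassword";

await using var connection = new MySqlConnection(connectionString);
await connection.OpenAsync();

// wait_timeout: seconds MySQL waits on an idle (sleeping) connection
// before closing it; 28800 (8 hours) by default.
await using var command = new MySqlCommand("SELECT @@wait_timeout;", connection);
var waitTimeout = await command.ExecuteScalarAsync();
Console.WriteLine($"wait_timeout = {waitTimeout} seconds");
```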
So, to lower the number of parallel connections, either disable `Pooling` (a radical method with some performance implications if the server is not hosted locally), or manage their lifetime through the `MaxPoolSize` and `ConnectionLifeTime` connection string parameters.
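For example, a connection string along the following lines caps the pool at 20 connections and recycles each connection after 5 minutes (host, credentials and the concrete numbers are placeholder assumptions, not recommendations):

```csharp
// MaxPoolSize caps the number of pooled connections per connection string;
// ConnectionLifeTime (in seconds) closes a connection when it is returned
// to the pool after having been open longer than that value.
var connectionString =
    "Server=db.example.com;Database=mydb;User=myuser;Password=mypassword;" +
    "Pooling=true;MaxPoolSize=20;ConnectionLifeTime=300";
```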
In the screenshot you provided, you can see that the host varies in address and port, but these are the address and outgoing port of the connecting client.
However, what I wrote above also applies here: if you are connecting from multiple web servers (clients from the viewpoint of MySQL) to your database server, then by default, each web server will manage its own connection pool and keep up to 100 connections open.
For example, if you have a load balancer with multiple web servers behind it that may each execute database queries, or you run your API on a serverless technology like AWS Lambda, where your API might be hosted on many different web servers, then you might end up with multiple connection pools (one for each connection string on each web server).
> My issue is that this is only for one connected user. I am using `Task.WhenAll()` in my app, which launches several API requests in parallel, and I also use asynchronous methods in my API, which creates some new threads.
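That client-side pattern looks roughly like the following sketch (the endpoint URL and the request count of 20 are made-up assumptions); each parallel request that reaches the API can rent its own database connection:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Made-up endpoint and request count, for illustration only.
using var client = new HttpClient();
var tasks = Enumerable.Range(0, 20)
    .Select(i => client.GetStringAsync($"https://api.example.com/items/{i}"));

// 20 requests in flight at once; each one may open (or rent) one database
// connection on whichever web server handles it.
var results = await Task.WhenAll(tasks);
Console.WriteLine($"{results.Length} responses received.");
```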
If your client sends multiple web requests (e.g. REST calls) to your API hosted on multiple web servers/AWS Lambda, then MySQL will need at least that many connections open at the same time.
MySqlConnector's connection pooling will, by default, keep those connections open after they have been used (up to 100 connections per web server).
You will end up with the following number of sleeping database connections (assuming the connection string always stays the same on each web server):
`number-of-parallel-requests-per-client * number-of-clients * number-of-webservers`
However, the max. number of connections kept open will (by default) not be higher than:
`number-of-webservers * 100`
So if your app executes 20 requests in parallel, with each of them establishing a database connection from the API (web server) to the database server, then depending on your API hosting scenario, the following will happen:
- **3 web servers with a load balancer:** If you run your app often enough, the load balancer will distribute your requests to all 3 web servers over time. So each of the 3 web servers will keep around 20 connections open to your database server. You end up with 60 connections, as long as only one client executes these requests at a time. If a max. of 4 clients run 20 requests in parallel, you will end up with 80 connections being kept open per web server over time, so 3 * 80 = 240 sleeping database connections in total.
- **Serverless technology like AWS Lambda:** The same as in the previous example applies to AWS Lambda, but the number of web servers is (in theory) infinite. So you might end up exceeding MySQL's `max_connections` setting pretty fast if AWS decides to distribute the API calls to many different web servers.
## Advice on how to configure connection pooling
If you run your API on a fixed number of web servers, you might want to keep `Pooling=true` and set `MaxPoolSize` to a value such that `MaxPoolSize * number-of-webservers` will always stay lower than `max_connections`, but that is, if possible, higher than `typical-number-of-parallel-clients * number-of-parallel-requests-per-client`. If that is not plausible or possible, consider setting `Pooling=false`, or set `MaxPoolSize` to a number not higher than `max_connections / number-of-webservers`.
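As a rough sketch of that calculation (all numbers are assumptions for illustration, not recommendations):

```csharp
// Assumed values for illustration only.
const int maxConnections = 300; // MySQL's max_connections setting
const int webServerCount = 3;   // fixed number of web servers

// Cap each server's pool so the whole fleet stays within max_connections:
// 300 / 3 = 100 pooled connections per web server at most.
int maxPoolSize = maxConnections / webServerCount;

var connectionString =
    "Server=db.example.com;Database=mydb;User=myuser;Password=mypassword;" +
    $"Pooling=true;MaxPoolSize={maxPoolSize}";
```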
If you run your API on a serverless technology like AWS Lambda, either set `Pooling=false`, or set `MaxPoolSize` to some low value and `ConnectionLifeTime` to some low value greater than zero. I would just set `Pooling=false`, because it is way easier to disable connection pooling than to effectively fine-tune it for a serverless environment.
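Both variants as a minimal sketch (host, credentials and the concrete numbers are placeholder assumptions):

```csharp
// Option A: no pooling; the connection is really closed on Dispose().
var noPooling =
    "Server=db.example.com;Database=mydb;User=myuser;Password=mypassword;" +
    "Pooling=false";

// Option B: a small pool with a short lifetime (values are assumptions),
// so idle serverless instances do not hold connections open for hours.
var smallPool =
    "Server=db.example.com;Database=mydb;User=myuser;Password=mypassword;" +
    "Pooling=true;MaxPoolSize=5;ConnectionLifeTime=60";
```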