I know that MS SQL Server can handle up to 32,767 user connections from different computers, and I also know this can be configured inside the server.

But is there any limit on the client side imposed by the Windows 10 operating system?

I have a SQL Server stress program that runs a lot of threads to simulate load from a Windows 10 client. At random, we get connection pool problems.

But when I put the software on a server OS (not the same machine as the database), it has no problem simulating the load.

When running on Windows 10 I get this error: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."

Running the same software server to server, I get no errors.
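
For reference, each stress thread obtains its connection roughly like this minimal sketch (assuming ADO.NET's System.Data.SqlClient, which is where the error above comes from; the server name, database and pool settings here are made up):

    // Minimal sketch (assumption: the stress program uses ADO.NET's System.Data.SqlClient).
    using System.Data.SqlClient;

    class PoolDemo
    {
        // Hypothetical connection string. The default Max Pool Size is 100 per
        // connection string; raising it is one knob, but returning connections
        // to the pool quickly (via 'using') matters more.
        const string ConnString =
            "Server=myServer;Database=myDb;Integrated Security=true;" +
            "Max Pool Size=200;Connection Timeout=30";

        static void RunQuery(string sql)
        {
            // Dispose returns the connection to the pool as soon as the work is done.
            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }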

  • All client OS versions have had a connection limit, since Windows 95 at least. Besides, using a single machine for stress testing is bad practice because all requests come from the same IP, same cable, same OS stack, and multiple packets may be combined by the stack. Stress testing should be performed from several clients at the same time – Panagiotis Kanavos Sep 14 '16 at 13:28
  • Any URL documenting the connection limit on the client operating system? – Johan Bertilsdotter Sep 14 '16 at 13:31
  • That's googlable information. In any case, using a single machine is simply the wrong way to do stress testing. Visual Studio's test features allow you to install and control multiple test agents, *and* have them send requests at random intervals – Panagiotis Kanavos Sep 14 '16 at 13:32
  • We are going to build a price calculation program that is structured like the stress test simulation program: it uses a lot of threads to calculate prices and opens a lot of connections to the database. We see the same behaviour with it as with the stress test program. We have tried to google the limit on the number of threads... before we hit the error, but the articles we find talk about the SQL Server side. – Johan Bertilsdotter Sep 14 '16 at 13:39
  • Price calculations do not require multiple connections to a server (I work for a very big OTA). Loading a few rows of data at a time, or trying to execute on the client what should be done on the server, is a serious scalability problem. Trying to use multiple database connections in parallel is another way to do harm. Load what you want early, or use SQL statements to do the work on the server. Don't try to "speed up" the server by moving the data to the client – Panagiotis Kanavos Sep 14 '16 at 13:46
  • I found a similar answer in this post: http://stackoverflow.com/questions/8762678/database-connection-pooling-with-multi-threaded-service – Johan Bertilsdotter Sep 15 '16 at 05:39
  • The problem isn't the pool, it's the behaviour of the application itself. If you want to *throttle* connections you should *throttle* the upstream functions as well. You can't do that with a semaphore unless you want your *functions* to block; you just transfer the blocking from the pool to the functions. The real alternative is to use a different mechanism, e.g. TPL Dataflow, which allows you to break the pricing steps into blocks that pass messages from one to another – Panagiotis Kanavos Sep 15 '16 at 09:59
  • You can specify input limits for each block and the number of parallel tasks to execute. The result is that upstream blocks are forced to wait in a controlled manner if the next block has a full input queue (see the sketch after these comments). Without such a mechanism you can easily end up with deadlocks in your code, as one function waits for another to release the semaphore – Panagiotis Kanavos Sep 15 '16 at 10:00
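
To illustrate the TPL Dataflow suggestion from the comments, here is a rough sketch of the throttling pattern with hypothetical pricing types and step names (it requires the System.Threading.Tasks.Dataflow package); it is not the actual pricing code, only the pattern described above:

    // Rough sketch of the throttling pattern described in the comments
    // (hypothetical PriceRequest/PriceResult types; requires the
    // System.Threading.Tasks.Dataflow NuGet package).
    using System.Threading.Tasks.Dataflow;

    class PriceRequest { }
    class PriceResult { }

    class PricingPipeline
    {
        public static ITargetBlock<PriceRequest> Build()
        {
            var options = new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = 8, // at most 8 items processed concurrently
                BoundedCapacity = 100       // producers wait when this block's queue is full
            };

            // Each block could open one pooled connection, do its work and dispose it.
            var calculate = new TransformBlock<PriceRequest, PriceResult>(
                req => CalculatePrice(req), options);

            var save = new ActionBlock<PriceResult>(
                result => SaveResult(result), options);

            calculate.LinkTo(save, new DataflowLinkOptions { PropagateCompletion = true });
            return calculate;
        }

        // Placeholder steps standing in for the real pricing and persistence logic.
        static PriceResult CalculatePrice(PriceRequest req) => new PriceResult();
        static void SaveResult(PriceResult result) { }
    }

Feeding requests with SendAsync (rather than Post) makes producers await instead of failing when BoundedCapacity is reached, which gives the controlled back-pressure the comments describe.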

0 Answers