
I'm doing some experiments with my application. The application runs in a Docker container and is programmed to send 4 requests per second to a web server. When I host 30 containers on a single server, everything runs smoothly. However, when I scale up to 50 containers I see some performance degradation: the number of requests sent drops to 3 or 2 per second. I checked the CPU and memory utilization and both are quite stable and below 50%, and the load average on the server is around 4. My guess is that the cause could be excessive context switching, but I don't know where to look to confirm or rule that out. My question is: how do I detect software contention on a server? And more generally, how do I find bottlenecks?

PS. I'm using a Linux machine with 4 vCPU cores and 8 GB of RAM.
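
For reference, here is a minimal sketch of what I was planning to run on the host to watch the system-wide context-switch rate. It just samples the `ctxt` counter from `/proc/stat` once per second; the one-second interval is an arbitrary choice on my part:

```python
#!/usr/bin/env python3
"""Rough sketch: sample the system-wide context-switch counter from /proc/stat.

The 1-second sampling interval is an assumption, not a recommended value;
the plan would be to compare the rate between the 30- and 50-container runs.
"""
import time


def read_context_switches() -> int:
    """Return the cumulative context-switch count since boot (the 'ctxt' line in /proc/stat)."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                return int(line.split()[1])
    raise RuntimeError("ctxt line not found in /proc/stat")


def main() -> None:
    prev = read_context_switches()
    while True:
        time.sleep(1)  # sample once per second
        cur = read_context_switches()
        print(f"context switches/sec: {cur - prev}")
        prev = cur


if __name__ == "__main__":
    main()
```

As far as I understand, `vmstat 1` reports the same counter in its `cs` column, and `pidstat -w` breaks voluntary and involuntary switches down per process, so those would be the numbers I'd compare between the 30-container and 50-container runs.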

N-Alpr
  • What does performance degradation mean? What are you measuring, and what value do you consider bad? Do you control the web server, and what do the requests do? – John Mahowald May 09 '20 at 03:30
  • Sorry, by "performance degradation" I meant that instead of sending 4 requests per second it drops to 3 or 2 requests. Yes, I control everything in this setup. So my criterion for performance is the number of requests to the web servers. I made sure that the web servers are not the bottleneck! – N-Alpr May 09 '20 at 20:52
  • Can you trace the individual web requests (this can be as simple as adding proper logging) from both the client and the server side and check how long they take? Then identify outliers and see if there's anything suspicious around the time you notice the performance degradation (see the sketch after these comments). – Juraj Martinka May 11 '20 at 04:20
  • I know there is a performance degradation, and I think the cause is "too much context switching", but I don't know how to prove it. – N-Alpr May 13 '20 at 00:21
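
Following up on the logging suggestion above, here is a rough client-side sketch of what that per-request tracing could look like. The URL, the 0.5 s "slow" threshold, and the 0.25 s pacing are placeholders I made up, not values from the real application:

```python
#!/usr/bin/env python3
"""Sketch of client-side request timing with logging, as suggested in the comments.

TARGET_URL, SLOW_THRESHOLD_S, and the 0.25 s sleep are placeholder assumptions.
"""
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

TARGET_URL = "http://my-web-server.example/ping"  # hypothetical endpoint
SLOW_THRESHOLD_S = 0.5                            # arbitrary cut-off for flagging outliers


def timed_request(url: str) -> float:
    """Send one request and return its wall-clock duration in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return time.monotonic() - start


def main() -> None:
    while True:
        elapsed = timed_request(TARGET_URL)
        if elapsed > SLOW_THRESHOLD_S:
            logging.info("SLOW request: %.3f s", elapsed)
        else:
            logging.info("request: %.3f s", elapsed)
        time.sleep(0.25)  # roughly 4 requests per second, matching the app's send rate


if __name__ == "__main__":
    main()
```

The idea would be to run a similar timing log on the web-server side and line the timestamps up, so a slow request can be attributed to either the client host or the server.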

0 Answers