0

I have an API (PHP, connecting to a separate MySQL database server) that is called many times. The software calling it will wait a long time for the result and will not call the API again before it gets its result from the current call, so response times do not really matter here.

The API itself also does not really care about execution times.

So my question is: when the server (Ubuntu 16.04, Apache, PHP 5) load hits 100%, can the API still run properly, just with increased processing times and latency?

Or will there be some kind of buildup of garbage in RAM, or something else that will eventually kill the server and force me to restart it?

Sevron
  • 131
  • 1
  • 5

1 Answer

0

This should be fine if you don't care about requests being satisfied with increased latency during these times. However, you should be more specific about your CPU usage.

If you look at the output of top you will see several fields for CPU usage: user (us), system (sy), nice (ni), idle (id), I/O wait (wa), hardware IRQ (hi), software IRQ (si), and steal (st). You can also press 1 to expand this into per-core usage. This isn't immediately relevant to your situation, but it helps to be specific about which of these is at 100% when asking about load.
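If you want those same counters non-interactively (for logging or scripting), they come from the first line of /proc/stat, which is what top itself reads. A minimal sketch, assuming a Linux /proc filesystem; the values are cumulative ticks since boot, in the same order top labels them:

```shell
#!/bin/sh
# Read the aggregate CPU line from /proc/stat. Field order:
# user nice system idle iowait irq softirq steal (plus guest fields).
read -r cpu user nice system idle iowait irq softirq steal _ < /proc/stat

# Print the raw tick counters; sample twice and diff to get percentages.
echo "user=$user nice=$nice system=$system idle=$idle iowait=$iowait irq=$irq softirq=$softirq steal=$steal"
```

A sustained high iowait or steal value points at a very different problem (slow disk, oversubscribed hypervisor) than high user time, even though all of them show up as "load".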

CPU usage won't affect RAM garbage collection or fragmentation, and running a server at a high load shouldn't cause it to become unstable if it's tuned properly (the defaults are almost always fine here - it's why they're default).
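For this stack, "tuned properly" mostly means capping Apache's worker count so that full load means queued requests rather than swapping. A sketch, assuming Apache 2.4 with mod_php under mpm_prefork (the usual setup for PHP 5 on Ubuntu 16.04); the numbers are illustrative and should be sized so MaxRequestWorkers times your per-process PHP memory stays below physical RAM:

```apache
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    # Cap concurrent PHP processes so peak memory use stays below RAM;
    # excess requests queue instead of pushing the box into swap.
    MaxRequestWorkers      150
    # Recycle children periodically to bound any slow PHP memory creep.
    MaxConnectionsPerChild 1000
</IfModule>
```

With a cap like this in place, saturating the CPU just stretches latency, which the question says is acceptable.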

It's more cost-effective to run a server close to maximum load than to give it a lot of idle resources just to make the numbers look pretty. That calculus begins to change when very low latency is required, but not by much.

Spooler
  • 7,046
  • 18
  • 29