
We run an online restaurant-menu web application (Laravel backend + Vue.js frontend, MySQL) on DigitalOcean.

The application is becoming very slow during peak times, so we are investigating the cause.

The first mystery to solve is whether the problem lies in our server specifications (i.e., we need more CPU/memory) or in our code/configuration. It is a mystery because we apparently have conflicting information: New Relic reports 280% CPU utilization during peak time, while DigitalOcean reports only 30% CPU utilization at the same time.

How can we discover which figure is correct? What other tools can we use to monitor the system and find the bottlenecks?

Daniel Scocco
  • Linux load is not a real percentage. On Linux, a system with multiple CPUs/cores calculates load by adding the load of each CPU/core, and the load on each core can exceed 100%. In other words, on a system with 8 cores the load can reach 800% before the system is considered overloaded. On that system, a load of 280% could mean, for instance, two CPUs actually running at 100% utilization and some other processes running on the remaining CPUs without stressing them in the least. Normalized, you would say that system is running at 280/800 = 35% of its capacity. – Bob Sep 22 '20 at 13:11
  • Off topic: Requests for product, service, or learning material recommendations are off-topic because they attract low-quality, opinionated, and spam answers, and the answers become obsolete quickly. Instead, describe the business problem you are working on, the research you have done, and the steps taken so far to solve it. – TomTom Sep 22 '20 at 13:41
  • You ask us for tools. At this stage, hire an admin who is competent enough to know why one reports 280% and the other 30% - it makes a VERY good question to weed out people with very little admin knowledge since, as HermanB's answer shows, there is a trivial and fundamental reason. You do NOT ask for tools here - this site's rules make tool recommendations explicitly off-topic. – TomTom Sep 22 '20 at 13:43
  • @TomTom the OP didn't only ask for tools, but also for discovering which information was correct. Not that off-topic, imho. – LeRouteur Sep 22 '20 at 14:01
  • @LeRouteur Let me quote the OP: "What other tools can we use to monitor and discover the bottlenecks?" - that looks to me like a question, you know. For tools, you know. – TomTom Sep 22 '20 at 14:03
  • @TomTom Let me quote the OP: "How to discover what information is correct?" - that looks to me like a question, you know. For help, you know. – LeRouteur Sep 22 '20 at 14:27

1 Answer

Quoting @HermanB's answer:

Linux load is not a real percentage. On Linux, a system with multiple CPUs/cores calculates load by adding the load of each CPU/core, and the load on each core can exceed 100%.

In other words, on a system with 8 cores the load can reach 800% before the system is considered overloaded. On that system, a load of 280% could be, for instance:
  • two CPUs actually running at 100% utilization, and
  • some other processes running on the remaining CPUs without stressing them in the least.

Normalized (to more easily compare Linux systems with different CPU counts), you would say that system is running at 280/800 = 35% of its capacity.
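
The arithmetic can be sketched in a few lines of Python. This is only an illustration, assuming an 8-core droplet (the core count is not stated in the question), and `normalize_cpu` is a hypothetical helper, not part of New Relic or any monitoring tool:

```python
import os

def normalize_cpu(process_percent: float, core_count: int) -> float:
    """Convert a summed per-core CPU percentage (as reported by top or
    New Relic, where each core can contribute up to 100%) into a
    percentage of the host's total capacity (what DigitalOcean graphs)."""
    return process_percent / core_count

# The figures from the question, assuming an 8-core machine:
print(normalize_cpu(280.0, 8))  # 35.0 - close to DigitalOcean's ~30%

# On the current machine, the core count can be detected instead:
print(normalize_cpu(280.0, os.cpu_count() or 1))
```

So both numbers can be "correct" at the same time: they simply use different denominators.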

Bob
LeRouteur