
I have an AWS EC2 instance with a dual-core CPU and 4 GB of memory. I have set up an Apache2 HTTP server running PHP 7.0.30 and MySQL 5.7.22.

Various devices send GET/POST requests to my HTTP server, and each request runs SELECT and UPDATE queries.

Right now, around 200 devices hit my HTTP server simultaneously, each triggering SELECT and UPDATE queries. The requests carry data in JSON format.

The problem is that my MySQL server has become very slow. SELECT queries take a long time to return data, and pages are slow to load.

In phpMyAdmin, I see a number of processes in the Sleep state. I have also tuned various MySQL server parameters, but with no result.

One of the major offenders is an UPDATE query that writes long text data to a table. It arrives from every device every 60 seconds, and in the MySQL server status its processes only clear after a long time.

Is there a way to optimize this through MySQL parameters, so that the server stays fast even with thousands of queries over multiple connections updating a table column that holds long text?

Most of the global variables are at their default values. I also tried changing several of them, but it made no difference.

How can I reduce this slow processing of queries?

P.S.: I believe the issue is with the UPDATE queries. I have tuned the SELECT queries and they seem fine, but for UPDATE queries I see sleeps of up to 12 seconds in the Processes tab of phpMyAdmin.

I have added a link to an image showing the issue (here you can see sleeps of even 13 seconds, all on UPDATE queries):

Here is the PasteBin for the query of an UPDATE operation: https://pastebin.com/kyUnkJmz

    If you have a question about your queries, it would be helpful if you included the relevant queries in your question. – Patrick Q Jul 16 '18 at 12:25
  • @PatrickQ, thank you for the suggestion. But most of the queries are SELECT and UPDATE queries like "SELECT * from ". I also tuned queries and changed from "SELECT *" to "SELECT " .... Also, there are UPDATE queries like "UPDATE " .. – Ubaid Ashraf Jul 16 '18 at 12:29
  • If you really don't have any `WHERE` clauses on your queries (and therefore are selecting/updating _all_ rows), then that's going to be one big cause of slowness. See also https://meta.stackoverflow.com/questions/333952/why-should-i-provide-an-mcve-for-what-seems-to-me-to-be-a-very-simple-sql-query – Patrick Q Jul 16 '18 at 12:30
  • @PatrickQ, there is a WHERE clause in my SELECT queries. It is the UPDATE queries that are taking too much time. Under the Processes tab in phpMyAdmin, I see "sleep" of up to 12 seconds. – Ubaid Ashraf Jul 16 '18 at 12:34
  • @PatrickQ, I have updated my question and added a screenshot for better understanding .. – Ubaid Ashraf Jul 16 '18 at 12:46
  • Do you think it's possible for you to make ONE call with all the query? Instead of having 200 `select...` and `update...`, doing only 1 `select ...; update...;` with all the query? – Mickaël Leger Jul 16 '18 at 12:50
  • @MickaelLeger, Thank you for replying. This can't be done. It is because the 'GET/POST' hits are coming from devices .. A device wants an information, it sends a Get request, if the device needs to update some data in the Database, it simply Posts it .. – Ubaid Ashraf Jul 16 '18 at 12:52
  • I don't know how your devices works, but can't you make different "cycle" for each? For example x devices send get/post request at xx:00, y send at xx:05s, z send at xx:10s, etc. ? Just to avoid to have ALL the request in same time – Mickaël Leger Jul 16 '18 at 12:56
  • @MickaelLeger, each device sends a GET request every 5 seconds and a POST request every 2 minutes. These two requests are in a never-ending loop. There are also other GET/POST requests, but they are condition-based and not in an infinite loop. It is also not possible to make a different cycle for each device; the devices send when they need to exchange data. Making different cycles seems like losing data and adding delay for the devices. – Ubaid Ashraf Jul 16 '18 at 12:59
  • @UbaidAshraf - Show the query in text, _not_ image. And be sure to include the end of the statement, where it has the `WHERE`. And please provide `SHOW CREATE TABLE icp_ap_stats` . How many Updates per second? – Rick James Jul 16 '18 at 19:26
  • @UbaidAshraf - Is that 200 `UPDATEs` every 5 seconds? or every 2 minutes? – Rick James Jul 16 '18 at 19:31
  • @RickJames, 200 devices updates every 5 seconds (a GET hit) and every 2 minutes (a POST hit) – Ubaid Ashraf Jul 17 '18 at 05:32
  • @RickJames, here is the complete query of an UPDATE operation .. https://pastebin.com/kyUnkJmz – Ubaid Ashraf Jul 17 '18 at 06:00
  • Is that 8*2327*25KB=460MB? Or is it 8*200*25KB=40MB for the table? Check via `SHOW TABLE STATUS icx_ap_stats` – Rick James Jul 18 '18 at 06:09

1 Answer


That is ~25KB for the JSON! (Maybe 22KB if the backslashes vanish.) And 40 writes/sec, with more every 2 minutes.

I would like to see `SHOW CREATE TABLE`, but I can still make some comments.

  • In InnoDB, that big a row will be stored 'off record'. That is, there will be an extra disk hit to write that big string elsewhere.
  • Compressing the JSON should shrink it to about 7K, which may allow storing that big string inline, thereby cutting back some of the I/O. Do the compression in the client to help cut back on network traffic as well. And make the column a `BLOB`, not `TEXT`.
  • Spinning drives can handle about 100 I/Os per second.
  • The 200 devices every 5 seconds needs to average 40 writes/second in order to keep up. That's OK.
  • Every 2 minutes there are an extra 40 writes. This may (or may not) push the amount of I/O past what the disk can handle. This may be the proximate cause of the "updating for 13 seconds" you showed. Was that snapshot taken shortly after a 2-minute boundary?
  • Or are the devices out of sync? That is, do the POSTs come all at the same time, or are they spread out across the 2 minutes?
  • If each UPDATE is a separate transaction (or you are running with `autocommit=ON`), then there is an extra write -- for transactional integrity. This can be turned off (a tradeoff between speed and safety): `innodb_flush_log_at_trx_commit = 2`. If you don't mind risking 1 second's worth of data, this may be a simple solution.
  • Is anything else going on with the table? Or is it just these Updates?
  • I hope you are using InnoDB (which is what my remarks above are directed toward), because MyISAM would be stumbling all over itself with fragmentation.
  • Long "Sleeps" are not an issue; long "Updates" are an issue.
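The flush-log and column-type suggestions above can be sketched as follows. This is a sketch, not a drop-in fix: the column name `json_stats` is illustrative (check your `SHOW CREATE TABLE` output for the real names), and relaxing the flush setting trades up to ~1 second of durability for speed.

```sql
-- Flush the InnoDB redo log to disk once per second instead of at every
-- commit.  On a crash you can lose up to ~1s of committed transactions.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- To make it persistent, put this under [mysqld] in my.cnf:
--   innodb_flush_log_at_trx_commit = 2

-- Store the client-side-compressed JSON as binary data instead of TEXT.
-- (Column name is hypothetical; substitute your actual JSON column.)
ALTER TABLE icx_ap_stats MODIFY json_stats MEDIUMBLOB;
```

`SET GLOBAL` takes effect for new connections only, so existing device connections keep the old setting until they reconnect.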

More

  • Have an index on usermac so that the UPDATE does not have to slog through the entire table looking for the desired row. You could probably drop the id and add PRIMARY KEY(usermac).
  • Some of the comments above are off by a factor of 8 -- there seem to be 8 JSON columns in the table, hence 200KB/row.
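The index suggestion above might look like this (the `usermac` and `id` names come from the discussion; verify them against your actual schema before running anything):

```sql
-- Let each UPDATE find its row directly instead of scanning the table:
ALTER TABLE icx_ap_stats ADD INDEX (usermac);

-- Or, if usermac uniquely identifies a row, replace the surrogate id
-- with it as the primary key (only do this if nothing references id):
-- ALTER TABLE icx_ap_stats
--   DROP PRIMARY KEY, DROP COLUMN id,
--   ADD PRIMARY KEY (usermac);
```

On a 2327-row table the `ADD INDEX` itself is cheap; the win is that each of the ~40 UPDATEs per second stops doing a full-table scan.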
Rick James
  • Here is the output of the command `SHOW CREATE TABLE icx_ap_stats`: https://pastebin.com/FvF2GrAS ....... Yesterday, I changed the table storage engine from InnoDB to MyISAM to check for any improvement but didn't observe any change. – Ubaid Ashraf Jul 18 '18 at 05:34
  • MyISAM will lead to worse and worse fragmentation. You could do `OPTIMIZE TABLE` once a week to clean up the mess. But it will re-fragment quickly. And add `INDEX(usermac)`. – Rick James Jul 18 '18 at 06:01
  • _The index will help the most._ – Rick James Jul 18 '18 at 06:12
  • @RickJames, thank you for the useful information and answer. I will make the changes and will let you know. – Ubaid Ashraf Jul 18 '18 at 14:08