
We are in the process of migrating our databases from a Microsoft SQL Server AWS RDS instance to an AWS MySQL 8.0.17 RDS instance.

Our web applications use an ORM (Hibernate) for database interactions, with one application tied to one database.

The database server currently contains 172 databases, with roughly 260 tables each (44,479 tables in total). It is set up with a single user that has access to all databases (only 4 other system/AWS users are returned from `SELECT * FROM mysql.user`).

Starting up an application requires the ORM to inspect the information_schema.
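For illustration, the metadata lookups involved are of this general shape (the exact queries Hibernate issues may differ, and `app_db_001` is a hypothetical database name):

```sql
-- Illustrative only: the kind of per-table metadata lookup an ORM
-- issues at startup; repeated across hundreds of tables per database
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'app_db_001'
ORDER BY table_name, ordinal_position;
```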

Unfortunately, this currently takes over 10 minutes for a single web application. MySQL seems to struggle with information_schema access: queries are usually stuck in the "checking permissions" state for up to 5 seconds, and it appears to perform hundreds of these lookups per database.

On our staging server, the same web application started up in under a minute, due to only having 8 databases rather than 172.

We have since reproduced the same slowness by adding the additional 164 databases to the staging server, which indicates that the issue lies with the number of databases/tables on the server.

We have already applied the settings below, but they have not improved performance:

innodb_stats_on_metadata=0
innodb_stats_persistent=0
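For completeness, the applied values can be confirmed on the running instance (RDS parameter-group changes to static parameters only take effect after a reboot):

```sql
-- Verify the parameter-group changes actually took effect
SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
SHOW GLOBAL VARIABLES LIKE 'innodb_stats_persistent';
```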

Does anyone have any ideas on how we can optimise MySQL further to get the desired performance?

Any help/advice to speed up our schema queries is greatly appreciated.

---- More information ----

Thanks for your replies. As requested, Pastebin URLs with more information are below:

Global Status: pastebin.com/Je40S48C
Show Variables: pastebin.com/FaN66Zrn

In terms of RAM: the above was taken from a staging server, an RDS instance (db.t3.small) with only 2 GB RAM and 2 vCPUs. I am only trying to connect to 2 databases; the rest are dummies to mimic the live number of tables. We initially noticed this on a db.r5.4xlarge, which has 128 GB RAM and 16 vCPUs, so I am confident memory and CPU are not the issue. Once our application is up and running

  • 2
    Sounds like maybe [`table_open_cache`](https://dev.mysql.com/doc/refman/8.0/en/table-cache.html) might need to be set to a much larger value to accommodate the behavior of the ORM (which is dubious, they may be reading more than they actually need, but I dislike/distrust ORMs so disregard this sentence). This is a value that has historically been an inverse performer -- the more "optimal" you set it, the worse it got, but I *think* that has been addressed. Take a look. It will of course help the most on the second run but may be faster on the first run since cache evictions are being avoided. – Michael - sqlbot Mar 27 '20 at 17:18
  • 1
    Please provide the `GLOBAL STATUS` and `VARIABLES`. How much RAM? – Rick James Mar 27 '20 at 22:28
  • Additional information request. Which AWS RDS model is being used? RAM size, # cores, any SSD or NVME devices on the MySQL host server? Post on pastebin.com and share the links. From your SSH login root, text results of: B) SHOW GLOBAL STATUS; after a minimum of 1 hour's UPTIME C) SHOW GLOBAL VARIABLES; D) SHOW FULL PROCESSLIST; for server workload tuning analysis to provide suggestions. – Wilson Hauck Mar 29 '20 at 02:03
  • @DanPacitti Thank you for the available data. Analysis is in process. We hope to have suggestions to you in an Answer within 24 hours. – Wilson Hauck Mar 31 '20 at 17:13
  • 1
    @DanPacitti In your 3 hours of SGS uptime, your system ran A) 3 select_full_join per second and 6 select_scan per second. View my profile, Network profile, and in our FAQ page we have listed how to find joins or queries not using indexes. For the logs to be available, in your AWS Parameters group, you will need log_output to be FILE rather than TABLE for the discoveries to be recorded for analysis. 1 minute of logging during your schema review processing will find plenty for you to work on to improve performance. – Wilson Hauck Apr 01 '20 at 11:48
  • Thanks for your suggestions, I will give these a go tomorrow – Dan Pacitti Apr 01 '20 at 18:24
  • @DanPacitti Please provide comment on how this system is performing in 2022 during ORM startup. Additional RAM saving tip: change temptable_max_ram=256K # from 1G to conserve RAM on your 2G server. Thanks – Wilson Hauck Jun 06 '22 at 12:58

2 Answers


Rate Per Second = RPS

Suggestions to consider for your staging server's AWS RDS Parameter group [mysqld] section:

innodb_change_buffer_max_size=2  # from 25 (percent) set aside from buffer pool.
innodb_buffer_pool_instances=1  # from 64 - you only have 2GB RAM, to save CPU cycles
innodb_adaptive_max_sleep_delay=20000  # from 150000 for limit of 2 second delay
innodb_io_capacity=1900  # from 200 to use more of SSD IOPS capacity
read_rnd_buffer_size=128K  # from 512K to reduce handler_read_rnd_next RPS of 146,131
read_buffer_size=512K  # from 128K to reduce handler_read_next RPS of 129,135
innodb_open_files=4000  # from 300 to be paired with table_open_cache of 4000
innodb_page_cleaners=2  # from 64 since you will be running with 1 instance
innodb_parallel_read_threads=2  # from 4 for no more than # cores
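As a rough check after applying these (a sketch, not part of the original suggestions): whether a table_open_cache/innodb_open_files of 4000 is sufficient can be judged from the cache counters after one ORM startup run:

```sql
-- If Opened_tables keeps climbing relative to Open_tables after the
-- startup scan, tables are still being evicted from the cache and the
-- cache may need to grow further
SHOW GLOBAL STATUS LIKE 'Open_tables';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL STATUS LIKE 'Table_open_cache%';
```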

You should find these changes will significantly reduce CPU BUSY.

On our Utility Scripts page you will find free downloadable scripts, specifically findfragtables.sql and find-redundant-indexes.sql, that will assist with improving performance.

Wilson Hauck

I have read the article linked below and strongly suggest trying its workaround for this known bug.

Shane Bester left a comment on my bug report explaining that this is a known bug and will be resolved in the next release. More importantly, there is a workaround that I have tried with great success. Thanks, Shane, wherever you are!

internal_tmp_mem_storage_engine=MEMORY

Shane said this:

Thanks for the test data. Found a workaround on 8.0.20 to improve the speed:

SET GLOBAL internal_tmp_mem_storage_engine=MEMORY;

It seems this bug is a duplicate of the internally filed:

Bug 30562964 : 8.0.18: PERFORMANCE REGRESSION IN SELECT DISTINCT which was introduced in 8.0.18 and fixed in 8.0.21.

The link is: https://stackoverflow.com/questions/62469293/database-performance-drop-after-upgrade-to-mysql-8-0-20

Chopper3