There is an InnoDB table with a size of 10 GB (1.31 GB of data, 8.5 GB of indexes).

When I run DROP TABLE, the operation takes about 120 seconds, sometimes up to 200 seconds.

What is the reason the table drops so slowly? Or can someone push me in the right direction?
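
For reference, the data/index split above can be confirmed from information_schema; a minimal sketch, with placeholder schema and table names (the figures are approximate, since they come from cached statistics):

-- Placeholder names: substitute your own schema and table.
-- DATA_LENGTH/INDEX_LENGTH are estimates from cached statistics.
SELECT TABLE_NAME,
       ROUND(DATA_LENGTH  / 1024 / 1024 / 1024, 2) AS data_gb,
       ROUND(INDEX_LENGTH / 1024 / 1024 / 1024, 2) AS index_gb
FROM   information_schema.TABLES
WHERE  TABLE_SCHEMA = 'mydb'
  AND  TABLE_NAME   = 'big_table';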

Output of SHOW ENGINE INNODB STATUS:

--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
1 RW transactions active inside InnoDB
---OLDEST VIEW---
Read view low limit trx n:o 67277521723
Trx read view will not see trx with id >= 67277521723, sees < 67277521723
Read view individually stored trx ids:
-----------------
Process ID=39136, Main thread ID=139658753697536, state: enforcing dict cache limit
Number of rows inserted 3578198296, updated 1633874218, deleted 240931374, read 1598497576800
0.04 inserts/s, 0.00 updates/s, 0.00 deletes/s, 133.77 reads/s

OS: Ubuntu 16.04.4 LTS
Hardware: Intel(R) Xeon(R) W-2145 CPU @ 3.70GHz, 256 GB RAM, SSD only

Global variables:

version 5.7.21-20-log (Percona Server (GPL), Release '20', Revision 'ed217b06ca3')
innodb_adaptive_flushing    ON
innodb_adaptive_flushing_lwm    10
innodb_adaptive_hash_index  ON
innodb_adaptive_hash_index_parts    8
innodb_adaptive_max_sleep_delay 150000
innodb_api_bk_commit_interval   5
innodb_api_disable_rowlock  OFF
innodb_api_enable_binlog    OFF
innodb_api_enable_mdl   OFF
innodb_api_trx_level    0
innodb_autoextend_increment 64
innodb_autoinc_lock_mode    1
innodb_buffer_pool_chunk_size   134217728
innodb_buffer_pool_dump_at_shutdown ON
innodb_buffer_pool_dump_now OFF
innodb_buffer_pool_dump_pct 25
innodb_buffer_pool_instances    64
innodb_buffer_pool_load_abort   OFF
innodb_buffer_pool_load_at_startup  ON
innodb_buffer_pool_load_now OFF
innodb_buffer_pool_size 171798691840
innodb_change_buffer_max_size   25
innodb_change_buffering all
innodb_cleaner_lsn_age_factor   high_checkpoint
innodb_cmp_per_index_enabled    OFF
innodb_commit_concurrency   0
innodb_compressed_columns_threshold 96
innodb_compressed_columns_zip_level 6
innodb_compression_failure_threshold_pct    5
innodb_compression_level    6
innodb_compression_pad_pct_max  50
innodb_concurrency_tickets  5000
innodb_corrupt_table_action assert
innodb_file_format  Barracuda
innodb_file_format_check    ON
innodb_file_format_max  Barracuda
innodb_file_per_table   ON
innodb_fill_factor  100
innodb_flush_log_at_timeout 1
innodb_flush_log_at_trx_commit  2
innodb_flush_method O_DIRECT
innodb_flush_neighbors  1
innodb_flush_sync   ON
innodb_flushing_avg_loops   30
innodb_force_load_corrupted OFF
innodb_force_recovery   0
innodb_ft_aux_table 
innodb_ft_cache_size    8000000
innodb_ft_enable_diag_print OFF
innodb_ft_enable_stopword   ON
innodb_ft_ignore_stopwords  OFF
innodb_ft_max_token_size    84
innodb_ft_min_token_size    3
innodb_ft_num_word_optimize 2000
innodb_ft_result_cache_limit    2000000000

All global variables: https://pastebin.com/6e34WvK7

  • We have a blog post that explains how DROP TABLE can be slow with innodb_file_per_table enabled. Currently I am trying to check whether it applies to this version of Percona Server, but you might like to read it and perhaps test the suggestion somewhere: https://www.percona.com/blog/2011/02/03/performance-problem-with-innodb-and-drop-table/ When I get an update on version applicability I will update and/or write up an answer. Disclaimer: I work for Percona – greenweeds Jul 09 '18 at 12:54
  • Again, a comment rather than an answer: are you using the Adaptive Hash Index? One of the team here suggested this could be the issue. – greenweeds Jul 09 '18 at 16:05
  • @percona-lorraine The full list of global variables indicates innodb_adaptive_hash_index = ON. A question for you: if your innodb_buffer_pool_size were 171 GB, how many innodb_buffer_pool_instances would you suggest using? Looking forward to your advice on this specific number of instances. I notice Nikolay Edison has not lowered innodb_lru_scan_depth from its default of 1024, which should be adjusted when you have more than one innodb_buffer_pool_instance. Even the minimum of 100 for innodb_lru_scan_depth × 64 instances is still a lot of work for the CPU every second. – Wilson Hauck Jul 10 '18 at 10:32
  • I randomly changed innodb_lru_scan_depth to 512, but it didn't help. While DROP TABLE was running, SHOW ENGINE INNODB STATUS showed the following: https://pastebin.com/egRF5kY8 (checking permissions DROP TABLE ...) – Nikolay Edison Jul 10 '18 at 20:39
  • I have determined that the slow drop is not reproducible on a test server. The problem only affects a server that is continuously processing a variety of SQL queries. – Nikolay Edison Jul 11 '18 at 13:39
  • Your `innodb_io_capacity` is pretty low. This value essentially throttles how much I/O you can do during InnoDB page flushing. InnoDB must mark-delete all the pages of the dropped table (loading from disk any that are not already in the buffer pool), then flush all dirty pages to disk, doing no more than 200 IOPS. If you have any sort of RAID or SSD, this value should be closer to 600 or 800. You also have too many buffer pool instances for the amount of buffer pool space allocated, and io_capacity is divided between the instances. It seems like you are choking yourself with these settings (a sketch of these runtime changes follows the comments). – utdrmac Jul 12 '18 at 19:59
  • Could you provide the SHOW CREATE TABLE for the table you are dropping (is it partitioned)? Is the filesystem mounted with discard? SHOW ENGINE INNODB STATUS shows "checking foreign keys": are there lots of tables with a FK to this table (a query sketch follows the comments)? – Alexander Rubin Jul 13 '18 at 13:51
  • @percona_lorraine Nikolay Edison has ~168 GB of innodb_buffer_pool_size and innodb_buffer_pool_instances=64; please advise on an appropriate number of innodb_buffer_pool_instances. Considering that the number of instances × innodb_lru_scan_depth equals the CPU workload every second, why would you use more than 8 innodb_buffer_pool_instances? – Wilson Hauck Jul 14 '18 at 17:27
  • @NikolayEdison Set innodb_lru_scan_depth to 100 (the minimum) rather than the random value of 512 and you should see a noticeable improvement in response time. Your CPU will then carry only 20% of the per-second workload that 512 imposes for this function. See the reference manual, including all comments, for innodb_lru_scan_depth. – Wilson Hauck Jul 14 '18 at 17:30
  • @NikolayEdison If you post the text results of SHOW GLOBAL STATUS, we will be able to analyze your configuration with much more information available. Complete results from mysqltuner.pl would be a real bonus, because they include per-engine data-space usage, table counts, and other helpful details to assist with the analysis. – Wilson Hauck Jul 14 '18 at 17:42
  • @AlexanderRubin SHOW CREATE TABLE: https://pastebin.com/kBGM42aM. The table is not partitioned and does not have any foreign keys. – Nikolay Edison Jul 16 '18 at 06:58
  • Yesterday I wrote that innodb_io_capacity accelerated the drop, but that was wrong, because the server had been rebooted before the drop. In reality nothing helped: I set innodb_io_capacity = 700 and innodb_lru_scan_depth = 100, but the drop takes 120 seconds, the same as before. – Nikolay Edison Jul 17 '18 at 06:47
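
Two sketches related to the comments above. First, Alexander Rubin's foreign-key question can be checked against information_schema (schema and table names are placeholders):

-- Lists every table holding a foreign key that references the
-- table being dropped; an empty result means no FK checks apply.
SELECT TABLE_SCHEMA, TABLE_NAME, CONSTRAINT_NAME
FROM   information_schema.KEY_COLUMN_USAGE
WHERE  REFERENCED_TABLE_SCHEMA = 'mydb'
  AND  REFERENCED_TABLE_NAME   = 'big_table';

Second, the settings suggested by utdrmac, Wilson Hauck, and greenweeds are all dynamic in MySQL/Percona Server 5.7, so they can be tried at runtime without a restart; the values below are only the ones proposed in the comments, not general recommendations:

-- All three variables are dynamic in 5.7; no restart needed.
SET GLOBAL innodb_io_capacity         = 700;  -- utdrmac: raise for SSD/RAID
SET GLOBAL innodb_lru_scan_depth      = 100;  -- Wilson Hauck: the minimum value
SET GLOBAL innodb_adaptive_hash_index = OFF;  -- greenweeds: AHI may slow DROP TABLE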

0 Answers