3

I'm running MySQL Server 5.1.52-community on Windows Server 2008 R2 x64 Standard Edition. It's a production server. When the application layer (another server) generates too many requests/TCP connections (more than 2,000 queries per second), the application layer stops.

The application layer is a very stable enterprise product used in many enterprises, and its support team clearly said the problem is something on the DB server.

So I adjusted the MySQL config many times, but under heavy load the application layer still stops.

The server has 16 GB of memory, but MySQL only uses about 5 GB. So the first question is: how do I let the MySQL server service use up to 12 GB?

One odd thing I noticed is that the mysqld process has over one million handles (1,114,345), which seems very abnormal, since a normal process has at most around 2,000 handles. So, experts, is that OK? If not, how do I fix it?

The database is InnoDB, without views or stored procedures.

Please help, thanks.

EDIT: after looking at the experts' comments, here is the current MySQL config:

[client]
port=3306

[mysql]
default-character-set=utf8


# SERVER SECTION
# ----------------------------------------------------------------------
#
# The following options will be read by the MySQL Server. Make sure that
# you have installed the server correctly (see above) so it reads this 
# file.
#
[mysqld]

# The TCP/IP Port the MySQL Server will listen on
port=3306


#Path to installation directory. All paths are usually resolved relative to this.
basedir="C:/Autonomy/MySQL/"

#Path to the database root
datadir="D:/MySQL Datafiles/data/"

# The default character set that will be used when a new schema or table is
# created and no character set is defined
default-character-set=utf8

# The default storage engine that will be used when creating new tables
default-storage-engine=INNODB

# Set the SQL mode to strict
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

# The maximum amount of concurrent sessions the MySQL server will
# allow. One of these connections will be reserved for a user with
# SUPER privileges to allow the administrator to login even if the
# connection limit has been reached.
max_connections=400

# Query cache is used to cache SELECT results and later return them
# without actually executing the same query once again. Having the query
# cache enabled may result in significant speed improvements, if you
# have a lot of identical queries and rarely changing tables. See the
# "Qcache_lowmem_prunes" status variable to check if the current value
# is high enough for your load.
# Note: In case your tables change very often or if your queries are
# textually different every time, the query cache may result in a
# slowdown instead of a performance improvement.
query_cache_size=84M

# The number of open tables for all threads. Increasing this value
# increases the number of file descriptors that mysqld requires.
# Therefore you have to make sure to set the amount of open files
# allowed to at least 4096 in the variable "open-files-limit" in
# section [mysqld_safe]
table_cache=256

# Maximum size for internal (in-memory) temporary tables. If a table
# grows larger than this value, it is automatically converted to a disk-
# based table. This limitation is for a single table. There can be many
# of them.
tmp_table_size=369M


# How many threads we should keep in a cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there aren't
# more than thread_cache_size threads from before.  This greatly reduces
# the amount of thread creations needed if you have a lot of new
# connections. (Normally this doesn't give a notable performance
# improvement if you have a good thread implementation.)
thread_cache_size=8

#*** MyISAM Specific options

# The maximum size of the temporary file MySQL is allowed to use while
# recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
# If the file-size would be bigger than this, the index will be created
# through the key cache (which is slower).
myisam_max_sort_file_size=100G

# If the temporary file used for fast index creation would be bigger
# than using the key cache by the amount specified here, then prefer the
# key cache method.  This is mainly used to force long character keys in
# large tables to use the slower key cache method to create the index.
myisam_sort_buffer_size=738M

# Size of the Key Buffer, used to cache index blocks for MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache rows. Even if you're not using
# MyISAM tables, you should still set it to 8-64M as it will also be
# used for internal temporary disk tables.
key_buffer_size=320M

# Size of the buffer used for doing full table scans of MyISAM tables.
# Allocated per thread, if a full scan is needed.
read_buffer_size=64K
read_rnd_buffer_size=256K

# This buffer is allocated when MySQL needs to rebuild the index in
# REPAIR, OPTIMIZE, ALTER TABLE statements as well as in LOAD DATA INFILE
# into an empty table. It is allocated per thread so be careful with
# large settings.
sort_buffer_size=256K


#*** INNODB Specific options ***
innodb_data_home_dir="D:/MySQL Datafiles/"

# Use this option if you have a MySQL server with InnoDB support enabled
# but you do not plan to use it. This will save memory and disk space
# and speed up some things.
#skip-innodb

# Additional memory pool that is used by InnoDB to store metadata
# information.  If InnoDB requires more memory for this purpose it will
# start to allocate it from the OS.  As this is fast enough on most
# recent operating systems, you normally do not need to change this
# value. SHOW INNODB STATUS will display the current amount used.
innodb_additional_mem_pool_size=26M

# If set to 1, InnoDB will flush (fsync) the transaction logs to the
# disk at each commit, which offers full ACID behavior. If you are
# willing to compromise this safety, and you are running small
# transactions, you may set this to 0 or 2 to reduce disk I/O to the
# logs. Value 0 means that the log is only written to the log file and
# the log file flushed to disk approximately once per second. Value 2
# means the log is written to the log file at each commit, but the log
# file is only flushed to disk approximately once per second.
innodb_flush_log_at_trx_commit=2

# The size of the buffer InnoDB uses for buffering log data. As soon as
# it is full, InnoDB will have to flush it to disk. As it is flushed
# once per second anyway, it does not make sense to have it very large
# (even with long transactions).
innodb_log_buffer_size=13M

# InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
# row data. The bigger you set this the less disk I/O is needed to
# access data in tables. On a dedicated database server you may set this
# parameter up to 80% of the machine physical memory size. Do not set it
# too large, though, because competition of the physical memory may
# cause paging in the operating system.  Note that on 32bit systems you
# might be limited to 2-3.5G of user level memory per process, so do not
# set it too high.
#innodb_buffer_pool_size=1243M
innodb_buffer_pool_size=4096M

# Size of each log file in a log group. You should set the combined size
# of log files to about 25%-100% of your buffer pool size to avoid
# unneeded buffer pool flush activity on log file overwrite. However,
# note that a larger logfile size will increase the time needed for the
# recovery process.
innodb_log_file_size=622M

# Number of threads allowed inside the InnoDB kernel. The optimal value
# depends highly on the application, hardware as well as the OS
# scheduler properties. A too high value may lead to thread thrashing.
innodb_thread_concurrency=18
# Enter a name for the query log file; otherwise a default name will be used.
# If the # is removed from the line below, SQL queries will be logged.
#log



# InnoDB tablespace auto-extend
innodb_data_file_path=ibdata1:18M;inno_db_001:400M:autoextend
#innodb_log_group_home_dir="D:/MySQL Datafiles/"
lower_case_table_names=1
innodb_file_io_threads=4
innodb_lock_wait_timeout=50


#replication configuration
log-bin=mysql-bin
server-id = 1

So I think if I change
innodb_buffer_pool_size=4096M
to
innodb_buffer_pool_size=12G
then 12 GB will be allocated to the mysqld service/process. Right? What else should be changed?
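
To make the intent concrete, a minimal sketch of that change (assuming the box is dedicated to MySQL and the data is InnoDB-only; the exact figure is an assumption, not a measured recommendation):

[mysqld]
# Assumption: this box is dedicated to MySQL and the data is InnoDB-only.
# 12G matches the question above, but some headroom has to stay free for the
# OS, the MyISAM key cache and the per-thread buffers shown in the config above.
innodb_buffer_pool_size=12G

Keep in mind that read_buffer_size, sort_buffer_size and friends are allocated per connection thread, so with max_connections=400 they also count against the 16 GB.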

I also found that if a query takes more than 15 minutes, the application considers it a failed attempt even though MySQL is still working on fetching it! Maybe this is what causes the handle count to grow so large.
I also think some of the queries have to read from a mapped network drive, but nothing at all can be done to change that; it's totally out of my control.

Jawad Al Shaikh
  • When you say "the application layer stops" what are you talking about specifically? The application that uses this MySQL Database for storage stops working? Does it have any logging mechanisms? What is happening on the MySQL Server when the application stops working? Is there anything in the MySQL logs when it stops working? – iainlbc Jun 30 '11 at 17:11
  • Is the "one million handles" growing over time? – sleeves Jul 01 '11 at 14:51
  • How big in MB/GB is the data store? – thinice Jul 01 '11 at 18:42
  • @iainlbc 1. Yes, the application stops. 2. Yes, it has logging. 3. The MySQL server keeps working even after the application stops, which is normal. 4. The MySQL logs show nothing odd because it didn't stop; the log only shows that slow queries occur, some of them taking up to 17 minutes, but the queries are out of our hands to tune, if they are tunable at all. – Jawad Al Shaikh Jul 02 '11 at 05:53
  • @sleeves the handle count changes a little after reaching 1,100,000; in other words, it stays in the range of 1,100,000. – Jawad Al Shaikh Jul 02 '11 at 05:56
  • @rovangju the total DB size is about 15 GB; every 7 days it is recycled due to the nature of the application requirements. The text-tracks table normally has 100 million records (12 GB) with an index size of 10 GB; the second biggest table is binary-tracks (2 GB). – Jawad Al Shaikh Jul 02 '11 at 05:59
  • Not to be a cock, but.. "the application layer is very stable enterprise product" with 15+ minutes in one query? – pauska Jul 04 '11 at 13:17
  • @pauska: that has nothing to do with the app layer. MySQL is processing 1,500 queries/sec! If the load were low that wouldn't occur. – Jawad Al Shaikh Jul 04 '11 at 16:08

2 Answers

4

Handles can be a real headache sometimes. What I usually do for clients with this problem is apply a quick-and-dirty band-aid: run this command:

FLUSH TABLES;

This closes the open handles on all tables and reopens them. Handles that were not properly closed disappear, particularly when a table has dozens of handles open to it and only one or two are in actual use. I have watched handle counts drop in the MONyog charts I constructed right after running this.
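
A quick way to watch the effect (these are generic MySQL status variables and commands, nothing specific to this setup; "your_database" is a placeholder):

SHOW GLOBAL STATUS LIKE 'Open_tables';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
FLUSH TABLES;
SHOW OPEN TABLES FROM your_database;   -- the In_use column shows which tables still hold handles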

You are probably running queries that look quite normal and whose EXPLAIN plans tell you nothing. But get enough queries together that individually pile up open handles, and you can hit race conditions where handles are opened faster than they are closed. The visible net effect is a lot of entries in the slow query log whose queries, when executed standalone, work fine. Additionally, you will stockpile queries that simply spin their wheels copying to temporary tables or doing intermittent sorting.
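
If the copying-to-temp-table suspicion needs confirming, these generic counters will show it; watch how they grow under load (nothing here is specific to this server):

SHOW GLOBAL STATUS LIKE 'Created_tmp%';
-- Created_tmp_disk_tables vs. Created_tmp_tables shows how often
-- in-memory temporary tables spill to disk
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
-- a steadily growing value suggests sorts that do not fit in sort_buffer_size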

Here is a sample status variable related to handles:

Handler_read_last : The number of requests to read the last key in an index. With ORDER BY, the server will issue a first-key request followed by several next-key requests, whereas with ORDER BY DESC, the server will issue a last-key request followed by several previous-key requests. This variable was added in MySQL 5.5.7.

Now, this does not exist in MySQL 5.1.52, but here is a question: do you have queries that perform ORDER BY ... DESC? If you do, then you wish you had Handler_read_last. Since you do not, what would MySQL do with ORDER BY ... DESC? It would traverse the entire index to get to the last row, collect the keys, and sort them. Are there a lot of queries of this nature in your application? (A question for your research.)
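
Since the application is closed source, one way to answer that research question on 5.1 is to switch on the general query log for a short window under load and grep the file for ORDER BY ... DESC. Both variables are dynamic in 5.1; the file name below is only an example, and the general log is expensive on a busy server, so keep the window short:

SET GLOBAL general_log_file = 'D:/MySQL Datafiles/query-sample.log';  -- example path
SET GLOBAL general_log = 'ON';
-- let it collect a minute or two of traffic, then:
SET GLOBAL general_log = 'OFF';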

Here is another status variable related to handles:

Handler_read_first : The number of times the first entry in an index was read. If this value is high, it suggests that the server is doing a lot of full index scans; for example, SELECT col1 FROM foo, assuming that col1 is indexed.

Another question for your research: do you have a lot of queries that perform full index scans?
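
The generic way to check (table and column names here are hypothetical) is to EXPLAIN the suspect statements and look for access type "index", and to watch the handler counters over time:

EXPLAIN SELECT col1 FROM foo ORDER BY col1 DESC;
-- type = index in the EXPLAIN output means a full index scan
SHOW GLOBAL STATUS LIKE 'Handler_read%';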

Another piece of speculation: check your open_files_limit and innodb_open_files. You may need to increase one or both of them if the cap on the number of open files is causing new handles to be created while old ones are left behind.
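
Both are easy to inspect; note that in 5.1 neither can be changed at runtime, so raising them means editing my.ini and restarting mysqld (the numbers below are purely illustrative):

SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'innodb_open_files';

and then in my.ini, under [mysqld]:

open_files_limit=8192
innodb_open_files=1024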

RolandoMySQLDBA
  • Thank you very much; please take a look at my comments answering the others' questions. That might help you understand the problem better. – Jawad Al Shaikh Jul 02 '11 at 06:01
  • Also check my edit to the question. – Jawad Al Shaikh Jul 02 '11 at 06:17
  • You said: "Do you have queries that perform ORDER BY ... DESC". Can you tell me how to find that out, since the application layer is closed source? Currently, as shown in the MySQL config, logging is set to a minimum. Can I change the config file, or use the MySQL Administrator GUI, to log which queries are executed? And if I upgrade to 5.5.7, maybe the client (the application using ODBC/JDBC) will get buggy, won't it? Which I can't change. – Jawad Al Shaikh Jul 02 '11 at 06:25
  • This is a workaround not a fix :( – symcbean Jul 07 '11 at 10:52
  • @symcbean: yes, the problem is still not fixed, but I gave up hope that someone would give a full solution free of charge, so I registered for a paid service to get more info. I understand that the question is quite uncommon; otherwise a Google search would do the job. – Jawad Al Shaikh Jul 07 '11 at 17:56
2

the mysqld process has over one million handles (1,114,345)

Seems very high.

if a query takes more than 15 minutes, the application considers it a failed attempt even though MySQL is still working on fetching it!

erk.

And you've got no timeout configured for MySQL?
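
If not, the usual server-side knobs are wait_timeout / interactive_timeout for idle connections and net_read_timeout / net_write_timeout for stalled transfers. A sketch only; the values are placeholders, and cutting idle connections can itself break an application that expects them to stay open:

[mysqld]
wait_timeout=600            # seconds an idle non-interactive connection may stay open
interactive_timeout=600
net_read_timeout=60
net_write_timeout=120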

What's the logic tier doing? Does it use persistent connections? Can you switch to non-persistent?

I think some of the queries have to read from a mapped network drive

FFS! If the data must live on a separate server, then get MySQL running on that server and use the federated driver or cluster replication. This is the only way (apart from iSCSI, which doesn't really lend itself to running on top of a filesystem) to get this to behave anything like a well-designed system.
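
For reference, a FEDERATED table is just a local definition that points at a table on the remote MySQL instance. The names and connection string below are entirely hypothetical, and the FEDERATED engine is disabled by default in 5.1, so it has to be enabled at server startup:

CREATE TABLE remote_tracks (
  id INT NOT NULL,
  payload TEXT,
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://app_user:app_pass@remote-host:3306/tracksdb/tracks';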

Once you've sorted out the remote drive issue, I'd recommend:

1) using non-persistent connections if possible

2) query tuning - if your queries all run fast enough, this won't be a problem. I use some home-brew scripts along with this bit of Perl code to identify what needs work.

3) run mysqltuner.pl against your server (also Perl); see the example invocation below.

There is a free Perl implementation for MS Windows here.
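
A typical invocation against the local server, with hypothetical credentials (the exact options depend on the mysqltuner version; the script only reads SHOW VARIABLES / SHOW STATUS output, so it is safe to run on production):

perl mysqltuner.pl --host 127.0.0.1 --user root --pass secret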

symcbean
  • You said: "What's the logic tier doing?" It processes all sorts of information and classifies it to make it searchable. All connections use the MySQL ODBC driver. I can't tell right now whether the connections are pooled or persistent, but is that a major issue? Because I'm looking for a core/root fix. – Jawad Al Shaikh Jul 04 '11 at 16:13
  • I tested that tuner, but no improvement was proposed. You have the config file; where do you see areas for improvement? – Jawad Al Shaikh Jul 04 '11 at 16:15
  • Did you fix the remote drive issue, investigate/implement non-persistent connections and set up a mechanism for analysing query performance first? – symcbean Jul 07 '11 at 10:51
  • It sounds like the network drive is not part of the game. Log analysis is not possible because nothing is logged. You have a valid point, but the other answer was better [version upgrade], although the problem is not fully fixed. The bounty was about to expire, and someone should receive it. – Jawad Al Shaikh Jul 07 '11 at 17:53