MySQL Tuning Advice

nrep

I've got a new server (Windows 2012R2 / IIS) and I'm seeing if there's anything I can do to tweak MySQL to get a little more out of it. It runs well on the current config, but I'd appreciate advice on further optimisations. The server runs a mixture of XF/VB/WordPress sites of varying sizes. InnoDB is used virtually everywhere, even for the XF search index, as MySQL 5.6 supports full-text indexes on InnoDB - hence the high innodb_buffer_pool_size.
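(For anyone curious, the conversion was along these lines - a minimal sketch, with illustrative table/column names rather than the actual XF schema:)

Code:
-- MySQL 5.6 supports FULLTEXT indexes on InnoDB tables,
-- so the search index no longer has to stay on MyISAM
ALTER TABLE search_index ENGINE=InnoDB;
ALTER TABLE search_index ADD FULLTEXT INDEX ft_content (title, message);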

Server specs:

Dual Processor Hex Core Xeon 2620
32GB DDR4
RAID 1 - 2 x 400GB SSD
Windows 2012R2
IIS
MySQL 5.6
PHP 5.5

Here's the MySQL Tuner Advice:

Code:
MySQL Tuner 0.7 - Peter Chapman <peter@conglomo.co.nz>
Currently running supported MySQL version 5.6.21-log
Operating on 64-bit architecture
Archive Engine Installed
Berkeley DB Engine Not Installed
Federated Engine Not Installed
InnoDB Engine Installed
ISAM Engine Not Installed
NDBCLUSTER Engine Not Installed
Data in InnoDB tables: 92G (Tables: 12654)
Data in MEMORY tables: 137M (Tables: 196)
Data in MyISAM tables: 18M (Tables: 44)
Total fragmented tables: 987
All database users have passwords assigned
Up for: 15d 22h 5m 5s (827M q [601.000 qps], 46M conn, TX: 16885G, RX: 212G)
Reads / Writes: 73% / 27%
Total buffers: 22.3G global + 3.5M per thread (500 max threads)
Maximum possible memory usage: 24.0G (75% of installed RAM)
Slow queries: 1% (657/827M)
Highest usage of available connections: 68% (336/500)
Key buffer size / total MyISAM indexes: 128.0M/193.0K
Key buffer hit rate: 99% (195K cached / 16 reads)
Query cache efficiency: 59% (341M cached / 584M selects)
Query cache prunes per day: 10502223
Sorts requiring temporary tables: 1% (214K temp sorts / 26M sorts)
Joins performed without indexes: 7349003
Temporary tables created on disk: 46% (2M on disk / 5M total)
Thread cache hit rate: 99% (336 created / 46M connections)
Table cache hit rate: 2% (10K open / 362K opened)
Open file limit used: 0% (100/67K)
Table locks acquired immediately: 99% (504M immediate / 504M locks)
InnoDB data size / buffer pool: 93.0G/22.0G
Run OPTIMIZE TABLE to defragment tables for better performance
Adjust your join queries to always utilize indexes
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_cache gradually to avoid file descriptor limits
query_cache_size (> 128M)
join_buffer_size (> 1.0M, or always use indexes with joins)
tmp_table_size (> 256M)
max_heap_table_size (> 16M)
table_cache (> 10240)
innodb_buffer_pool_size (>= 92G)
Scan Complete

My.ini:

Code:
default-character-set=latin1
default_storage_engine         = InnoDB

[mysqld]

bind-address = 0.0.0.0
port=3307
character-set-server=latin1
default-storage-engine=INNODB
sql-mode="NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
max_connections=500
query_cache_size=128M

table_open_cache=10240
tmp_table_size=256M
thread_cache_size=512
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=128M
key_buffer_size=128M
read_buffer_size=1M
read_rnd_buffer_size=1M
sort_buffer_size=2M

innodb_additional_mem_pool_size=26M
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=4M
innodb_buffer_pool_size=22G
innodb_log_file_size=256M
innodb_thread_concurrency=0
innodb_autoextend_increment=64M
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=300
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=70
flush_time=0
join_buffer_size=1M
max_allowed_packet = 64M
max_connect_errors=1000000
open_files_limit=65535
query_cache_type=1
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_master_info=10000
sync_relay_log=10000
sync_relay_log_info=10000
transaction_alloc_block_size = 8M
transaction_prealloc_size = 4M
query_cache_limit = 4M
query_prealloc_size = 262144
bulk_insert_buffer_size = 8M
query_alloc_block_size = 256M

I should perhaps add that I'm considering disabling the query cache to see how that goes, given that I'm running MySQL 5.6.
 
Code:
tmp_table_size = 96M
max_heap_table_size = 96M

You have the heap table size set to 16M. Increasing it to 96M will create fewer temp tables for you.
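If you want to watch the effect directly rather than waiting for the tuner, these are the counters that matter (a quick sketch; the disk-to-total ratio is the number to track):

Code:
-- How many implicit temporary tables have been created since startup,
-- and how many of those were converted to on-disk tables
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';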
 
Thanks hellreturn, I'll give that a try. How would I go about monitoring the change - would monitoring this variable be a good indicator?:

"Temporary tables created on disk: 46% (2M on disk / 5M total)"

Will those setting changes decrease the number of temp tables requested AND reduce the number created on disk? I hope you don't mind the questions - I just enjoy learning as I make changes!

edit: I've made the changes now and restarted MySQL to see how things go :).
 
From what I have observed, up to 128MB is good if you have spare memory. Anything above 256MB doesn't make much difference.

Yup, just run the tuner script once every 24 hours and monitor the results. You can safely increase the value up to 256MB, but do note that it will increase your memory usage as well.
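Both variables are dynamic in 5.6, by the way, so you can trial values without a restart (a sketch; new connections pick the values up, and the final numbers should still go in my.ini to survive a restart):

Code:
SET GLOBAL tmp_table_size = 96 * 1024 * 1024;      -- 96M
SET GLOBAL max_heap_table_size = 96 * 1024 * 1024; -- 96M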
 
Thanks, I'll do just that and monitor things :).

Does everything else look ok? I tried disabling the query cache, as I've read suggestions to do that elsewhere, but it impacted performance significantly when I did.
 
That tuner script actually showed that I was missing the query cache and thread cache entirely.

*shakes fist at upstream devs*
 
I ran the script, and got this:

Code:
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +Aria +BLACKHOLE +CSV +FEDERATED +InnoDB +MRG_MyISAM
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[--] Data in MEMORY tables: 252K (Tables: 6)
[--] Data in InnoDB tables: 319M (Tables: 453)
[--] Data in MyISAM tables: 821K (Tables: 7)
[!!] Total fragmented tables: 7

-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 6h 21m 46s (918K q [40.120 qps], 18K conn, TX: 10B, RX: 107M)
[--] Reads / Writes: 98% / 2%
[--] Total buffers: 448.0M global + 1.7M per thread (151 max threads)
[OK] Maximum possible memory usage: 698.1M (34% of installed RAM)
[OK] Slow queries: 0% (0/918K)
[!!] Highest connection usage: 92% (139/151)
[OK] Key buffer size / total MyISAM indexes: 16.0M/997.0K
[!!] Key buffer hit rate: 91.7% (36 cached / 3 reads)
[OK] Query cache efficiency: 44.6% (674K cached / 1M selects)
[!!] Query cache prunes per day: 71440
[OK] Sorts requiring temporary tables: 0% (3 temp sorts / 34K sorts)
[!!] Temporary tables created on disk: 30% (6K on disk / 20K total)
[OK] Thread cache hit rate: 96% (550 created / 18K connections)
[!!] Table cache hit rate: 18% (64 open / 339 opened)
[OK] Open file limit used: 0% (8/1K)
[OK] Table locks acquired immediately: 99% (188K immediate / 188K locks)
[OK] InnoDB buffer pool / data size: 384.0M/319.7M
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Reduce or eliminate persistent connections to reduce connection usage
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_open_cache gradually to avoid file descriptor limits
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Variables to adjust:
max_connections (> 151)
wait_timeout (< 28800)
interactive_timeout (< 28800)
query_cache_size (> 20M)
tmp_table_size (> 16M)
max_heap_table_size (> 16M)
table_open_cache (> 64)

If the query cache doesn't do anything, why is it seeing such active use?


[Update]
I just read about the query cache and InnoDB playing traffic cop.

Apparently it can still be useful on read-mostly applications with lots of identical requests (like WordPress), but not on a forum.
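One way I understand you can sanity-check it on a live workload (a sketch using the standard status counters; hit rate works out as Qcache_hits / (Qcache_hits + Com_select)):

Code:
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Com_select';
-- Frequent prunes mean the cache is churning rather than helping
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';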
 
The query cache isn't actually useful for InnoDB.

I've tried disabling it a couple of times, but page generation times increase by about 30% when I disable it. Would that mean I've got a problem elsewhere? Here's what I used to disable it in real time to perform the test:

Code:
SET GLOBAL query_cache_type=0;
FLUSH QUERY CACHE;
RESET QUERY CACHE;
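(As I understand it, that still leaves the cache buffer itself allocated; zeroing the size as well would release the memory:)

Code:
SET GLOBAL query_cache_size = 0;  -- deallocates the query cache buffer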
 
You should increase the buffer pool size when disabling the query cache.
Also add innodb_flush_method=O_DIRECT.
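Something along these lines in my.ini (a sketch - the buffer pool figure is just an example for this box, not a drop-in number):

Code:
query_cache_type=0
query_cache_size=0
innodb_buffer_pool_size=24G      # hand the freed memory to InnoDB
innodb_flush_method=O_DIRECT     # bypasses the OS cache to avoid double-buffering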
 
Yeah, it's a Windows server so I can't change that. I'm also limited with the InnoDB buffer pool size, as I'm approaching the limits of what I can spare - the same server does mail/IIS too. I could probably increase it by a GB or two, but that's about it.

Does this mean there's not much wiggle room for improvement at the moment (given the current hardware)?
 
Thanks, I've got innodb_io_capacity increased to 2000 (and innodb_io_capacity_max to 5000) as a test. innodb_file_per_table is already active too.
 
Okay, I guess you just need to optimize everything else. What is the output of SHOW GLOBAL STATUS LIKE 'Opened_tables'; ? table_cache should be table_open_cache, btw - the variable was renamed back in MySQL 5.1.
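If Opened_tables keeps climbing between checks, the cache is too small. It's dynamic in 5.6, so you can raise it on the fly to experiment (a sketch; persist the final value in my.ini):

Code:
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SET GLOBAL table_open_cache = 10240;  -- example value; tune to your table count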
 
I'd decrease wait_timeout and interactive_timeout to 300 as well; the default for both is 28800.
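A sketch of the runtime change, if you want to trial it before touching my.ini (sessions that are already open keep their old value):

Code:
SET GLOBAL wait_timeout = 300;
SET GLOBAL interactive_timeout = 300;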
As for thread_cache, run these:

Code:
SHOW GLOBAL STATUS LIKE 'Connections';
SHOW GLOBAL STATUS LIKE 'Threads_created';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

then post the values here.
 
Cheers, will tweak that. Here are the results:

Code:
Connections: 202
Threads_created: 4
Max_used_connections: 4
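(If my maths is right, that's a thread cache hit rate of 100 - (Threads_created / Connections) x 100 = 100 - (4 / 202) x 100, or about 98%, so thread_cache_size looks healthy.)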
 