How can I optimize a Linux MySQL server for faster queries on tables with many rows (up to 1 billion)?
The server is only used for development, so security, availability, and recovery are not concerns.
Here is an example:
ALTER TABLE links ADD UNIQUE KEY url_index(url(255))
- duration: over 6 hours
- row count: ~20 million
- low CPU usage (~19%)
- high IO (iotop showed mysql mostly at 600 KB/s read and 500 KB/s write with 99% IO, with spikes to 2-3 MB/s)
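A back-of-envelope estimate (my own rough assumption: 3 bytes per character for utf8 plus ~10 bytes of per-entry overhead) suggests why this is IO-bound — the prefix index alone would be far larger than the machine's 6 GB of RAM:

```sql
-- Worst-case size of the url(255) prefix index over ~20M rows:
-- 255 chars * 3 bytes/char (utf8) + ~10 bytes overhead per entry.
SELECT 20000000 * (255 * 3 + 10) / POW(1024, 3) AS index_gib;  -- roughly 14.4 GiB
```

If that estimate is even close, the sort that builds the index cannot fit in memory, which would explain the low CPU and constant disk traffic.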
links table:
CREATE TABLE `links` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`url` varchar(2000) NOT NULL,
`domain` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=27765646 DEFAULT CHARSET=utf8;
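One alternative I've been considering (untested at this scale, so treat it as a sketch): instead of a 255-character prefix index, index a CRC32 hash of the URL. The hash key is only 4 bytes wide, so the index would be much smaller; since CRC32 can collide, uniqueness would still need a comparison of the full url on the few colliding rows:

```sql
-- Sketch of a hash-based lookup column (my idea, values/names are illustrative):
ALTER TABLE links ADD COLUMN url_crc INT UNSIGNED NOT NULL DEFAULT 0;
UPDATE links SET url_crc = CRC32(url);
ALTER TABLE links ADD KEY url_crc_index (url_crc);

-- Lookups then go through the narrow 4-byte index, with the full url
-- compared only on the handful of rows sharing the same CRC:
SELECT * FROM links
WHERE url_crc = CRC32('http://example.com/')
  AND url = 'http://example.com/';
```

Would that be expected to build and query noticeably faster than the prefix index here?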
I have a dedicated Linux server (Ubuntu 12.10) with:
- 6 GB of RAM
- 1 TB HDD (non-SSD)
my.cnf (http://pastebin.com/vx6BNqrE)
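For reference, these are the kinds of MyISAM settings I've been experimenting with (values here are illustrative guesses on my part, not necessarily what's in the pastebin):

```ini
# Illustrative MyISAM tuning values (my experiments, not known-good):
key_buffer_size         = 2G    # MyISAM caches only index blocks in RAM
myisam_sort_buffer_size = 1G    # used when ALTER/REPAIR rebuilds indexes by sorting
bulk_insert_buffer_size = 256M
tmp_table_size          = 256M
max_heap_table_size     = 256M
```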
What can I optimize in terms of software (query/table design, my.cnf, or other server settings)? And what should I upgrade first in terms of hardware, other than RAM? (Unfortunately RAM is not upgradeable, as only 3 RAM slots are usable.)