My company has a MySQL server used by a team of analysts (usually 3-4 at a time). Lately the queries have slowed down; some of them now take days, on a database with tables of up to 1 billion (10^9) rows.
- Server specs: Linux OS, 64 GB of RAM, 3 TB of hard drive.
We know nothing about fine tuning, so any tool or rule of thumb to find out what is causing the trouble, or at least to narrow it down, would be welcome.
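For context, I assume something along these lines is the kind of diagnostic to start with (the 10-second threshold and the log path are just placeholders, and `your_table`/`some_column` stand in for our real names):

```sql
-- Enable the slow query log at runtime (requires a privileged account);
-- queries slower than 10 s get recorded for later inspection.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 10;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

-- See what is running right now and whether anything is stuck or blocked.
SHOW FULL PROCESSLIST;

-- For one specific slow query, check whether it actually uses an index
-- (table and column names here are hypothetical).
EXPLAIN SELECT COUNT(*) FROM your_table WHERE some_column = 42;
```

But I don't know how to interpret the output or what to try next.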
In MySQL Workbench > Table Inspector I found these key values for the database we use the most:
- DB size: ~500 GB
- Largest table: ~80 GB
- Index length (for the largest table): ~230 GB. The index spans 6 columns.
- Almost no MyISAM tables; nearly everything is InnoDB.
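In case it helps to reproduce these numbers outside Workbench, I believe a query against `information_schema` along these lines gives per-table data vs. index sizes (`your_db` is a placeholder for our schema name, and the figures are InnoDB estimates, not exact):

```sql
-- Per-table data and index size in GB, largest indexes first.
SELECT table_name,
       ROUND(data_length  / POW(1024, 3), 1) AS data_gb,
       ROUND(index_length / POW(1024, 3), 1) AS index_gb
FROM   information_schema.tables
WHERE  table_schema = 'your_db'   -- hypothetical schema name
ORDER  BY index_length DESC
LIMIT  10;
```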
Ideally I would like to tune the server (preferred), the database (less so), or both (eventually), in the simplest possible way, to speed things up.
My questions:
- Are these values (500, 80, 230 GB) normal and manageable for a medium-sized server?
- Is it normal to have an index of this size (230 GB), much larger than the table itself?
- What parameters or strategy can be tweaked to fix this? I'm thinking of memory or log settings, or buying more server RAM, but I'm happy to investigate any sensible answer.
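For instance, I assume the first settings to check would be something like these (the 50-75% figure is a rule of thumb I've read for dedicated database machines, not something we have verified):

```sql
-- Main InnoDB memory setting; the default is only 128 MB, and on a
-- dedicated 64 GB box it is commonly raised to 50-75% of RAM.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_log_file_size';

-- Buffer pool effectiveness: disk reads vs. total read requests.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```

But I don't know which of these actually matter for our workload.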
Many thanks.