
For a long time I’ve wanted to know how MySQL scales as you add more memory to the server. We looked for a balance between utilizing the hardware as much as possible, limiting system complexity, and lowering the price-to-performance ratio. Please suggest a proper solution for this issue.

Jayakrishnan T

3 Answers


There is no one-size-fits-all solution to every environment, but the process of performance tuning is always the same. Performance tuning is an iterative process:

  1. Benchmark and track metrics
  2. Find bottlenecks
  3. Resolve bottlenecks
  4. Repeat

As you resolve each bottleneck, you may find that another part of the system is now the limiting factor. That's why you need to repeat the process until you're happy with the outcome.

The main thing to look for at all levels of performance analysis is latency. Which activities take up the most time? Which activities are in the critical path of user activity? Latency in the critical path is a direct measure of user pain, whereas other metrics like throughput, IOPS, CPU, and memory usage don't map as directly onto user experience.
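For MySQL specifically, the slow query log is one concrete way to surface per-query latency. A minimal my.cnf sketch (the path and threshold below are illustrative assumptions, not recommendations):

```ini
[mysqld]
# Log any query slower than the threshold to a file for later analysis.
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log   # illustrative path
long_query_time     = 0.5                       # seconds; fractional values supported
```

Tools like mysqldumpslow can then aggregate the log to show which queries dominate total time.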

Identify your slow queries and optimize them: if possible, rewrite the query or add indexes; if not, add hardware. If your metrics show that most of a query's time is spent waiting to read from disk, adding memory to enlarge the cache will help resolve that bottleneck. If most of the time is spent waiting on synchronous writes to disk, get more disks or faster disks (e.g. SSDs). If most of the time is spent on-CPU or waiting for CPU time, get more or faster CPUs.
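To make the "add indexes" step concrete, here is a minimal sketch of inspecting a query plan before and after adding an index. It uses Python's built-in sqlite3 module as a self-contained stand-in (in MySQL you would use `EXPLAIN` against your real tables instead); the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a real server (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(10_000)])

# Without an index, the planner must scan every row.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchone()
print(plan[-1])  # e.g. a full-table "SCAN"

# Adding an index on the filtered column turns the scan into an index lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchone()
print(plan[-1])  # e.g. a "SEARCH ... USING INDEX" lookup
```

The same before/after comparison with MySQL's `EXPLAIN` shows whether a query goes from `type: ALL` (full scan) to an index access.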

If you have the opportunity to run your database on a system with DTrace, read Brendan Gregg's DTrace book and be enlightened.

Tom Shaw

For a long time I’ve wanted to know how MySQL scales as you add more memory to the server.

Pretty much like Oracle, SQL Server, and every other database server. The physics of database operations under ACID conditions don't change.

We looked for a balance between utilizing the hardware as much as possible, limiting system complexity, and lowering the price-to-performance ratio. Please suggest a proper solution for this issue.

What is the optimal car?

Same answer: IT DEPENDS. You don't say anything relevant enough to answer; voted to close.

See, I run a SQL Server box. SMALL installation: only 16 GB memory, 4 cores, 8 fast hard disks, and 1 SSD plus a boot drive. Yes, this is small. In another world (another contract) I work on an Oracle Exadata with 21,000 GB of database space that costs more than most super sports cars, because we need it. My next upgrade to my SQL Server will have room for 80 disks, 128 GB memory, and a lot more SSD. Most people here would consider this big; I consider it a nice lower-end server.

You may be in a league where you think a 60 GB SSD is expensive. You don't say. You say nothing about what you need to do or how much load you have. What do you expect us to answer?

The only sensible answer is: tune your bottlenecks, and plan for expansion. For example, SuperMicro has nice 4-rack-unit server cases for up to 72 disks, plus a motherboard; 2 rack units = 24 disks. One of those gives you in-computer scalability. Get a multi-processor board CPU-wise, then plug in one. Same with RAM. Resolve bottlenecks as they appear. Know what you do ;) Reducing system complexity is a futile goal once you move to a higher-end database. This is like a 5-star cook saying he doesn't want to cook and would rather serve prefab food. High-performance database systems ARE complex. Deal with it.

TomTom

It depends on the engine you are using. The common MyISAM engine is rather dumb in this respect: it does not do any meaningful memory management on its own, and nearly everything is left to the operating system. If the OS's paging algorithm can figure out what the engine might want next, it will keep the right memory pages in the filesystem cache. If not, it will throw them away, and they will have to be loaded from disk again for the next query.

As the OS has no knowledge of MyISAM's data structures, it will not give any preference to the database's index pages beyond their likelihood of being among the "most recently used" data. Every other "proper" DBMS in the computing world does its own memory management, sometimes even its own thread management on top of the operating system, as a performance optimization: giving priority to index pages when caching, and keeping changed data pages in memory for a while before writing them to disk (a "lazy writer").

So the definitive measure for a mostly-read MyISAM database would be getting more RAM in the system than the size of the database it is serving; that way you can be sure all the index pages are cached, reducing expensive I/O operations.
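As a concrete sketch, the relevant knobs live in my.cnf; the values below are illustrative assumptions, not recommendations, and depend entirely on your RAM and workload:

```ini
[mysqld]
# MyISAM itself caches only index blocks; data pages rely on the OS page cache.
key_buffer_size = 2G            # MyISAM index cache (illustrative size)

# InnoDB, by contrast, manages its own unified buffer pool for data and indexes:
innodb_buffer_pool_size = 12G   # often sized to a large share of RAM on a dedicated server
```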

And if you are looking for balanced and tunable performance with "real-world" queries (more than the "SELECT * from A LIMIT 10" typical of PHP apps), you should definitely look around for a different DBMS.

the-wabbit