
I need to convert my web application from MySQL to run on DB2. I need to know in advance whether I will need a high spec server to ensure the web application performs at the same speed as it does now on MySQL.

The application is a very database intensive data analysis application with a browser interface. It is only using about 20% of the available CPU power, and 30% of the memory on the server now. Would it take more CPU or memory if converted to DB2? Would I need a high spec server?

I am just looking for any general feel of whether a server upgrade would be required for a MySQL to DB2 conversion if everything else remains equal (database size, traffic, etc).

dabayl

3 Answers


Are you using MyISAM or InnoDB?

Typically, DB2 requires fewer system resources than current versions of MySQL. However, if it's performance you're after and the database(s) aren't too large, you may actually want to look into switching to PostgreSQL instead, since it's considerably less "featureful" (read: bloated) than MySQL. Sometimes you NEED MySQL, but if you don't, it's generally a better idea to go with PostgreSQL.
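If you're not sure which storage engine your tables use, one way to check is to query `information_schema` from the command line. This is a sketch, not part of your application; `your_db` is a placeholder for your actual schema name:

```shell
# List each table and its storage engine (MyISAM, InnoDB, etc.).
# Replace 'your_db' with your schema name and adjust credentials.
mysql -u root -p -e "
  SELECT table_name, engine
  FROM information_schema.tables
  WHERE table_schema = 'your_db';"
```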

brent saner
  • Thanks Brent. Unfortunately it's a client directive to move to DB2 so I don't have a choice - I was just concerned if it would take more resources to run, but so far it sounds like they are very similar. – dabayl Dec 05 '11 at 15:45

I don't think it will make a big difference whether you run MySQL or DB2. However, you can't be sure about performance unless you actually try it yourself. I recommend trying it in a testing environment similar to your current system.

Khaled

DB2 should make fairly efficient use of memory on the server if you leave its self-tuning memory management feature (STMM) enabled, provided that processes outside of DB2 are allocating and releasing their memory in a well-behaved manner. STMM adjusts the size of several high-impact memory buffers and heaps to accommodate an ever-changing database workload. This feature has been enabled by default for the last couple of major releases of DB2, and generally arrives at memory settings that are nearly identical to those of a database hand-tuned by a DB2 expert.

One caveat of STMM is that it is prohibited by design from making wild swings in its memory sizes, which means that STMM may need to make several incremental adjustments to accommodate a sudden spike or drop in database utilization. DB2 offers a wealth of built-in monitoring features to help you track the efficiency and growth patterns of internal memory structures, so you can quickly obtain an idea of what "normal" looks like for a particular workload.
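As a rough sketch of how you might confirm STMM is active (assuming a database named SAMPLE; `SELF_TUNING_MEM` and `DATABASE_MEMORY` are real DB2 configuration parameters, but verify the syntax against your DB2 version's documentation):

```shell
# Check whether self-tuning memory is enabled for the database.
db2 connect to SAMPLE
db2 get db cfg for SAMPLE | grep -i "SELF_TUNING_MEM"

# Enable it if it is currently off.
db2 update db cfg for SAMPLE using SELF_TUNING_MEM ON

# Optionally let DB2 also manage the overall database memory size.
db2 update db cfg for SAMPLE using DATABASE_MEMORY AUTOMATIC
```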

On the CPU side, one of the biggest risks to good performance and scalability is scanning, which burns CPU cycles even when all of the required data pages are already cached in buffer pool memory. Understanding how indexes work, and when they'll be used or ignored for specific SQL statements, is particularly important as tables grow and queries broaden. Sometimes, unacceptable scanning occurs not because of poor indexing or lousy join predicates, but because the table's cardinality and distribution statistics collected by RUNSTATS are out of date, which misleads DB2's cost-based query optimizer into underestimating the consequences of using a partial or full scan. The db2expln utility will show the access plan for a proposed query by taking the current statistics into account. At runtime, it's also possible to monitor the number of rows read (scanned) versus rows actually selected, which can give you an idea of how much churn is occurring to arrive at the result sets for your workload.
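To illustrate the steps above, here is a sketch of the relevant commands. The table, database, and column names (`MYSCHEMA.MYTABLE`, `SAMPLE`, `COL1`) are placeholders; `RUNSTATS`, `db2expln`, and the `MON_GET_PKG_CACHE_STMT` monitoring function are standard DB2 facilities, though the monitoring function requires DB2 9.7 or later:

```shell
# Refresh cardinality and distribution statistics so the cost-based
# optimizer is working from current information.
db2 "RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND DETAILED INDEXES ALL"

# Show the access plan DB2 would choose for a query, based on the
# statistics currently in the catalog.
db2expln -d SAMPLE -statement "SELECT * FROM MYSCHEMA.MYTABLE WHERE COL1 = 5" -terminal

# Compare rows read (scanned) to rows returned for cached statements,
# worst offenders first.
db2 "SELECT SUBSTR(STMT_TEXT, 1, 50) AS STMT, ROWS_READ, ROWS_RETURNED
     FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2))
     ORDER BY ROWS_READ DESC FETCH FIRST 10 ROWS ONLY"
```

A large ratio of rows read to rows returned is the kind of churn described above, and usually points at a missing index or stale statistics.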

Fred Sobotka