DB2 should make fairly efficient use of memory on the server if you leave its self-tuning memory management feature (STMM) enabled, provided that processes outside of DB2 are allocating and releasing their memory in a well-behaved manner. STMM adjusts the size of several high-impact memory buffers and heaps to accommodate an ever-changing database workload. This feature has been enabled by default for the last couple of major releases of DB2, and generally arrives at memory settings nearly identical to those of a database hand-tuned by a DB2 expert. One caveat is that STMM is prohibited by design from making wild swings in its memory sizes, which means it may need several incremental adjustments to accommodate a sudden spike or drop in database utilization. DB2 offers a wealth of built-in monitoring features to help you track the efficiency and growth patterns of internal memory structures, so you can quickly obtain an idea of what "normal" looks like for a particular workload.
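As a rough sketch of how you might check STMM's status and watch those memory structures, the following assumes a database named SAMPLE (a placeholder) and uses the MON_GET_MEMORY_POOL monitoring table function; exact column names and availability can vary by DB2 version, so verify against your release's documentation:

-- Confirm STMM is enabled, from the DB2 command line processor:
--   db2 connect to SAMPLE
--   db2 get db cfg for SAMPLE
-- Look for: Self tuning memory (SELF_TUNING_MEM) = ON (Active)
-- Parameters under STMM control report the AUTOMATIC keyword in their values.

-- Track the size and high-water mark of memory pools over time:
SELECT memory_set_type,
       memory_pool_type,
       memory_pool_used,          -- current bytes in use
       memory_pool_used_hwm       -- high-water mark since activation
FROM TABLE(MON_GET_MEMORY_POOL(NULL, CURRENT SERVER, -2)) AS t
ORDER BY memory_pool_used DESC;

Sampling this query periodically and comparing snapshots is one way to establish the "normal" growth pattern mentioned above.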
On the CPU side, one of the biggest risks to good performance and scalability is scanning, which burns CPU cycles even when all of the required data pages are already cached in buffer pool memory. Understanding how indexes work, and when they'll be used or ignored for specific SQL statements, is particularly important as tables grow and queries broaden. Sometimes, unacceptable scanning occurs not because of poor indexing or lousy join predicates, but because the table's cardinality and distribution statistics collected by RUNSTATS are out of date, which misleads DB2's cost-based query optimizer into underestimating the cost of a partial or full scan. The db2expln utility will show the access plan for a proposed query, taking the current statistics into account. At runtime, it's also possible to monitor the number of rows read (scanned) versus rows actually selected, which gives you an idea of how much churn is occurring to arrive at the result sets for your workload.
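To make this concrete, here is a hedged sketch of the three steps above: refreshing statistics with RUNSTATS, explaining a plan with db2expln, and comparing rows read to rows returned with the MON_GET_PKG_CACHE_STMT table function. The database name SAMPLE, the table MYSCHEMA.ORDERS, and the WHERE clause are all placeholders; adjust for your environment and DB2 version:

-- Refresh cardinality and distribution statistics so the optimizer
-- can cost index access versus scans accurately:
CALL ADMIN_CMD('RUNSTATS ON TABLE MYSCHEMA.ORDERS
                WITH DISTRIBUTION AND DETAILED INDEXES ALL');

-- From the shell, show the access plan under the current statistics:
--   db2expln -d SAMPLE -terminal -graph \
--     -statement "SELECT * FROM myschema.orders WHERE status = 'OPEN'"

-- At runtime, find cached statements reading far more rows than they
-- return; a high ratio is a flag for heavy scanning:
SELECT SUBSTR(stmt_text, 1, 60) AS stmt,
       rows_read,
       rows_returned,
       CASE WHEN rows_returned > 0
            THEN rows_read / rows_returned
       END AS rows_read_per_returned
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS t
ORDER BY rows_read DESC
FETCH FIRST 10 ROWS ONLY;

A statement returning a handful of rows while reading millions is a likely candidate for a missing index or stale statistics.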