0

We've got a system running RHEL 4 AS 32-bit (2.6.9-42.ELsmp #1 SMP) with 20GB of RAM and 8 dual-core processors that runs an Oracle 10g database.

This is overkill, but I am having trouble explaining why. I am also wondering if there could be memory-related problems due to the disproportionate system configuration: could the OS be spending too many resources scheduling the CPUs, etc.?

Thanks

andyhky
  • 2,732
  • 2
  • 25
  • 26

3 Answers

1

As far as cores vs memory, you're talking about 20GB on 16 cores; that's (optimistically) 1.25 GB per core, which is not a tonne of memory if you really are pinning all 16 cores. We are definitely running systems with higher memory/core ratios than 20GB/16, and our vendor keeps trying to sell us on boxes that can be configured with memory into the many hundreds of GB across 24 cores.

As far as application performance, 16 cores and 20GB of RAM isn't necessarily overspec'd for Oracle; we definitely run systems that big. That said, 20GB of RAM doesn't do much for you when you're running a 32-bit kernel; rebuilding on RHEL5-64 would be a good choice.

You'll also need to have your DBA look into configuring Oracle to take advantage of the large amount of physical memory - our DBAs have 'disabled AMM and configured huge_pages' on our bigger Oracle servers.
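On Linux, huge pages are typically reserved through sysctl; a hedged sketch of what that DBA configuration might look like at the OS level (the page count below is a placeholder you would size to the SGA, not a recommendation):

```shell
# reserve pages at boot via /etc/sysctl.conf (placeholder count;
# size it to your SGA divided by the 2MB huge page size):
#   vm.nr_hugepages = 5120
# verify what the kernel currently exposes:
grep -i huge /proc/meminfo
```

If `HugePages_Total` is 0 after a reboot (or `sysctl -p`), the reservation didn't take, often because memory was too fragmented to allocate contiguous pages.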

Even if the machine is somewhat overspec'd for your current workload (and only long-term stats monitoring will really bear that out), odds are good you'll grow into it; and during periods of unforecasted heavy load, it's nice to have some overhead to play with instead of dying instantly.

Hope that helps!

Jeff Albert
  • 1,987
  • 9
  • 14
  • "that's (optimistically) 1.25 GB per core" - **very** optimistically - this does not appear to be a NUMA box! But agreed that 64bit is the best approach to get more from it. – symcbean Mar 30 '11 at 17:00
  • AMM has been a hot topic internally. Is there a reason they disabled it? – andyhky Mar 30 '11 at 18:05
  • They cited an Oracle document which stated unequivocally that for systems with >8GB of memory, AMM should be disabled. I presume this is because you apparently can't use huge_pages in conjunction with AMM (for example http://forums.oracle.com/forums/thread.jspa?threadID=1127253) – Jeff Albert Mar 30 '11 at 18:11
0

Unlikely.

What are you using the server for? Only a bit of profiling will tell you whether it's really over-specced or not.
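A first profiling pass doesn't need anything exotic; a sketch using tools that ship with most distributions (run these over a representative period, not a quiet one):

```shell
# load averages and uptime at a glance
uptime
# raw 1/5/15-minute load figures from the kernel
cat /proc/loadavg
# memory use, with the buffers/cache split that matters here
free -m
```

Trending these over days (munin, sar, etc.) tells you far more than a single snapshot.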

gerryk
  • 181
  • 1
0

I don't think you will have resource issues related to CPU scheduling or memory mapping. These operations only happen when needed, and when they do, their overhead is small relative to the processes being scheduled.

What I would look at is:

  • load average. This tells you roughly how many cores are busy. It is nice to have the load average below the number of CPUs, but if it stays well below that number, the server probably has more cores than it needs.
  • memory utilization excluding buffers. You will want somewhat more memory than that level, but probably not more than 2 or 3 times as much.
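Both checks can be sketched from the /proc interface (Linux assumed; the "2-3x" headroom above is a rule of thumb, not a hard limit):

```shell
#!/bin/sh
# compare the 1-minute load average to the online core count
load1=$(cut -d' ' -f1 /proc/loadavg)
cores=$(getconf _NPROCESSORS_ONLN)
echo "1-minute load ${load1} across ${cores} cores"

# memory use excluding buffers/cache: newer procps shows an
# "available" column; older versions show a "-/+ buffers/cache" line
free -m
```

A sustained load of 1-2 on 16 cores, as in the comments below, is the kind of gap that suggests over-provisioning.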

If you are running virtual hosts on the server, you may want to pin CPU(s) for each host. You may want to do the same for CPU-intensive single-threaded processes. If you do, spread the load across CPUs.
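On Linux that pinning can be done with taskset (the PID below is hypothetical):

```shell
# pin an already-running process (hypothetical PID 1234) to cores 0-1:
#   taskset -cp 0,1 1234
# launch a single-threaded job pinned to core 0:
taskset -c 0 echo "pinned to core 0"
```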

BillThor
  • 27,737
  • 3
  • 37
  • 69
  • load average: 1.34, 0.74, 0.54 – andyhky Mar 30 '11 at 15:01
  • @andyh_ky: You would want to monitor this for a few days. If you have monthly or annual processes, it helps to monitor while they run. A tool like munin does this nicely. However, it looks like 1 or 2 cores might be sufficient. – BillThor Mar 30 '11 at 15:04