We have a Java application running on two RHEL 5.5 systems. We recently got into a situation where we needed to add more memory to both systems.
Each system was rebooted within 5 minutes of the other. We confirmed that the systems received an even share of connections through our load-balancing device. The free output looked like the following:
    hostA:
                 total       used       free     shared    buffers     cached
    Mem:       3977340    3570688     406652          0      26472    3194816
    -/+ buffers/cache:     349400    3627940
    Swap:      2097144          0    2097144

    hostB:
                 total       used       free     shared    buffers     cached
    Mem:       3977340    1369456    2607884          0      44200     860736
    -/+ buffers/cache:     464520    3512820
    Swap:      1048568          0    1048568
While I would expect some difference in the amount of memory currently being used for cache, the extreme difference seems rather disconcerting. Is there any method to see which files currently have blocks in the page cache, or any other way to determine why such a large difference shows up on two systems that are mirror clones of each other, in a load-balanced setup, with relatively close reboot times?
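One approach I have been considering is a small fincore-style C program that uses mmap(2) plus mincore(2) to report how much of a given file is resident in the page cache. This is only a rough sketch (the program name is arbitrary and I have not tested it beyond the basics):

    /* cachecheck.c - report how many pages of each file are resident in
     * the page cache, using mmap(2) + mincore(2) (same idea as fincore).
     * Build: gcc -std=gnu99 -o cachecheck cachecheck.c
     * Usage: ./cachecheck /path/to/file [more files...]
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        long page_size = sysconf(_SC_PAGESIZE);

        for (int i = 1; i < argc; i++) {
            int fd = open(argv[i], O_RDONLY);
            if (fd < 0) { perror(argv[i]); continue; }

            struct stat st;
            if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); continue; }

            size_t pages = (st.st_size + page_size - 1) / page_size;

            /* Map the file without touching its data; mincore() then reports
             * which of those pages are already in the page cache. */
            void *map = mmap(NULL, st.st_size, PROT_NONE, MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) { perror("mmap"); close(fd); continue; }

            unsigned char *vec = malloc(pages);
            if (vec && mincore(map, st.st_size, vec) == 0) {
                size_t resident = 0;
                for (size_t p = 0; p < pages; p++)
                    resident += vec[p] & 1;   /* low bit set => page is resident */
                printf("%s: %zu of %zu pages resident (%.1f%%)\n",
                       argv[i], resident, pages, 100.0 * resident / pages);
            }

            free(vec);
            munmap(map, st.st_size);
            close(fd);
        }
        return 0;
    }

Running that against the application's data directories, log files and JARs on both hosts should at least show where hostA's extra cached pages are going.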
I realize that the systems are not in a bad state; however, I'm being asked to provide a reason or explanation as to why one system is bringing so much into cache and the other is not.
Other VM settings, such as swappiness and min_free_kbytes, are identical on both systems.
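For what it's worth, this is roughly how I compared the vm tunables: a throwaway program that reads a handful of /proc/sys/vm entries and prints name=value lines so the output from both hosts can be diffed directly (the list below is just the subset I looked at):

    /* vmdump.c - print selected /proc/sys/vm tunables as name=value lines.
     * Build: gcc -std=gnu99 -o vmdump vmdump.c
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *tunables[] = {
            "swappiness", "min_free_kbytes", "dirty_ratio",
            "dirty_background_ratio", "vfs_cache_pressure", "overcommit_memory",
        };

        for (size_t i = 0; i < sizeof(tunables) / sizeof(tunables[0]); i++) {
            char path[256], value[128] = "";
            snprintf(path, sizeof(path), "/proc/sys/vm/%s", tunables[i]);

            FILE *f = fopen(path, "r");
            if (f == NULL) {
                printf("%s=<unreadable>\n", tunables[i]);
                continue;
            }
            if (fgets(value, sizeof(value), f) != NULL)
                value[strcspn(value, "\n")] = '\0';   /* strip trailing newline */
            fclose(f);

            printf("%s=%s\n", tunables[i], value);
        }
        return 0;
    }

The same information comes out of sysctl -a, but a fixed list keeps the diff between the two hosts free of noise.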
Any ideas on what steps I should take to figure this out?