I've recently converted some Java apps to run with Linux manually-configured hugepages, as described here. I point out "manually-configured" because they are not transparent hugepages, which gave us some performance issues.
So now I've got about 10 Tomcats running on a system, and I'm interested in knowing how much memory each one is using.
I can get summary information out of /proc/meminfo
as described in Linux Huge Pages Usage Accounting.
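The system-wide counters are easy to pull with a quick grep (this only shows what /proc/meminfo exposes; there is no per-process breakdown here):

```shell
# System-wide hugepage counters only -- no per-process information
grep ^HugePages /proc/meminfo
```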
But I can't find any tools that tell me about actual per-process hugepage usage.
I poked around in /proc/pid/numa_maps
and found some interesting information that led me to this grossity:
function pshugepage () {
    local pid=$1 count=0 num
    # each hugepage-backed mapping in numa_maps reports its page count as dirty=N
    for num in $(grep 'anon_hugepage.*dirty=' "/proc/$pid/numa_maps" \
                 | grep -o 'dirty=[0-9]*' | cut -d= -f2); do
        count=$((count + num))
    done
    echo "process $pid is using $count huge pages"
}
Or this, in Perl:
sub counthugepages {
    my $pid = $_[0];
    open(NUMAMAPS, "<", "/proc/$pid/numa_maps") or die "can't open numa_maps: $!";
    my $hugepagecount = 0;
    while (my $line = <NUMAMAPS>) {
        # only hugepage-backed mappings; dirty=N is the page count
        next unless $line =~ m{ huge };
        next unless $line =~ m{dirty=(\d+)};
        $hugepagecount += $1;
    }
    close NUMAMAPS;
    # we want megabytes out, but we counted 2-megabyte hugepages
    return $hugepagecount * 2;
}
The numbers it gives me are plausible, but I'm far from confident this method is correct.
The environment is a quad-CPU Dell with 64 GB RAM, RHEL 6.3, and Oracle JDK 1.7.x (current as of 2013-07-28).