My application sits behind a load balancer, and every once in a while I like to do a status check on each machine to get an idea of how long each machine takes to return an index.html document.
The script looks like this:
for host in 192.168.0.7 192.168.0.8 192.168.0.9; do
    result=$( ( time wget -q --header="Host: domain.tomonitor.com" http://$host/ ) 2>&1 | grep real | awk '{print $2}' )
    date=$(date)
    echo "$date, $host, $result"
done
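Each pass through the loop prints a line like the following (the timestamp and timing shown here are only illustrative):

Mon Aug  2 14:05:11 EDT 2010, 192.168.0.7, 0m2.347s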
Since the application thinks it's on domain.tomonitor.com, I set that manually in the wget request header. The pipeline greps for the "real" time and awks out the time alone, dumping that into the $result variable. Empirically, it seems to work pretty well as a basic manual check -- responses typically take 2-3 seconds across my various servers, unless there are some unbalanced connections going on. I run it directly from my Mac OS X laptop against our private network.
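For what it's worth, here is the parsing step in isolation, assuming bash's time keyword (which writes its report to stderr in the form "real 0m2.347s"); the subshell plus 2>&1 is what lets grep and awk see that output:

( time sleep 2 ) 2>&1 | grep real | awk '{print $2}'
# prints something like 0m2.003s when run interactively in bash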
The other day I wondered if I could log the results over time using a cron job. I was amazed to find it reported subsecond responses, for example .003 seconds. I also tried displaying the script's output on my Desktop with an OS X desktop widget called Geektool and saw similar sub-second times reported.
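The cron entry itself just appends the script's output to a log file, roughly like this (the schedule and paths are illustrative, not my exact setup):

# illustrative crontab line, not my exact schedule or paths
*/5 * * * * /Users/me/bin/check_hosts.sh >> /Users/me/logs/host_timings.log 2>&1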
I suspect the difference is due to some user error -- some reason why the time wget command I'm running won't work. Can anyone tell me why the time it takes to run this script differs so much between user (me running by hand) and system (cronjob or Geektool) and how I might correct the discrepancy?
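I'm happy to collect more data if it helps -- for example, I could have the cron job capture the raw, unfiltered time output (before the grep/awk) so it can be compared against what I see by hand. Something like this, with an illustrative log path:

# append the unfiltered time report (it goes to stderr) for later comparison
( time wget -q --header="Host: domain.tomonitor.com" http://192.168.0.7/ ) >> /tmp/raw_time_output.log 2>&1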