You can do this with a simple bash loop and wget's -O option. Something like this:
i=0
while true
do
    # increment our counter
    ((i++))
    # get the file and save it
    wget -O "file.$i.jpg" http://user:pass@10.0.0.50/snapshot.cgi
    # presumably you want to wait some time after each retrieval
    sleep 30
done
One obvious annoyance is that if you already have a file.1.jpg in the directory and you start this script, it will be overwritten. To deal with that, you first need to find all the existing file.N.jpg files, find the largest value for N, and start at N+1. Here's an incredibly braindead way to do that:
# find the highest existing sequence number
# (ls -t sorts by modification time, so sort the numbers instead)
i=$(ls file.*.jpg 2>/dev/null | awk -F. '{ print $2 }' | sort -n | tail -1)
# if there weren't any matching files, force $i to 0
[ -n "$i" ] || i=0
# no need to add one here: the loop below increments $i before each save
# and then start your capture mechanism
while true
do
    # increment our counter
    ((i++))
    # get the file and save it
    wget -O "file.$i.jpg" http://user:pass@10.0.0.50/snapshot.cgi
    # presumably you want to wait some time after each retrieval
    sleep 30
done
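If parsing ls output bothers you (it breaks on odd filenames), the highest existing N can also be found with pure shell parameter expansion. Here's a sketch; `next_index` is just a name I made up, and it assumes the same file.N.jpg naming scheme as above:

```shell
# next_index: print the next free N for file.N.jpg in the current directory.
next_index() {
    next=1
    for f in file.*.jpg; do
        [ -e "$f" ] || continue      # glob matched nothing
        n=${f#file.}                 # strip the "file." prefix
        n=${n%.jpg}                  # strip the ".jpg" suffix
        case $n in
            *[!0-9]*) continue ;;    # skip names whose middle isn't a number
        esac
        [ "$n" -ge "$next" ] && next=$((n + 1))
    done
    echo "$next"
}
```

With file.1.jpg and file.7.jpg present, `next_index` prints 8, so you could just do `i=$(( $(next_index) - 1 ))` and let the loop's increment take over.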
Really I should rewrite this whole thing as a perl one-liner, but I'm tired and it's late, so I'll be lazy. Anyway, that should give you an idea of how to accomplish this with a simple shell-script mechanism.
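One last refinement worth considering: if you zero-pad the counter, a plain ls lists the captures in order and you avoid the numeric-sort dance entirely. A minimal sketch using printf (the helper name `pad_name` is mine, not anything standard):

```shell
# pad_name: format a counter as a zero-padded four-digit filename.
pad_name() {
    printf 'file.%04d.jpg' "$1"
}
pad_name 42    # prints file.0042.jpg
```

You'd then use `wget -O "$(pad_name "$i")" ...` in the loop instead of `file.$i.jpg`.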