
I have set up a standard web application server in AWS using a Bitnami Apache Tomcat AMI. The instance is running in a public subnet with all outbound traffic open, but inbound traffic is only allowed on port 22 (restricted to my IP) and on ports 80 & 443 from the load balancer.

I have recently been hit with a massive data charge because somehow the instance has transferred (outbound) in excess of 14 TB over the last couple of weeks. I shut the server down two days ago, have just fired it up again, and am looking around for any logs of any description that might show me what was happening. (The basic AWS reporting is useless.) I have only just installed IPTraf so I can at least monitor network traffic (all is quiet), and I have also set up some CloudWatch alarms to make sure it doesn't happen again.
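
For reference, the kind of alarm I mean can be created with the AWS CLI; something along these lines should work (the instance ID, threshold and SNS topic below are only placeholders, adjust them to your own setup):

    # Alarm when the instance pushes more than ~5 GB outbound in an hour
    aws cloudwatch put-metric-alarm \
        --alarm-name high-network-out \
        --namespace AWS/EC2 \
        --metric-name NetworkOut \
        --statistic Sum \
        --period 3600 \
        --evaluation-periods 1 \
        --threshold 5000000000 \
        --comparison-operator GreaterThanThreshold \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --alarm-actions arn:aws:sns:ap-southeast-2:123456789012:billing-alerts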

Any ideas where I might look for evidence of what was causing the massive outbound data transfer, and where it was going?

Cheers

Pelly
  • Do you need to allow the instance to initiate outbound connections at all?! – Michael Hampton Jan 07 '15 at 03:09
  • I can definitely review that, and in most cases the answer is probably no. I had it turned on so I could at least install software, talk to the internet, etc. It does need to speak to a MySQL RDS instance, but I can allow just that specifically. I am just hoping there is some way I can find out what happened in the first instance... – Pelly Jan 07 '15 at 03:26
  • FYI, AWS is likely to forgive the charges. Contact support. – ceejayoz Jan 07 '15 at 03:30
  • Yeah, I already have, and based on my conversations with them they are likely to. However, they want to know whether I have corrected the issue, and short of terminating the instance I would like to try to find out what happened and mitigate it. – Pelly Jan 07 '15 at 03:31
  • You can find evidence of traffic in iptables stats (if iptables is used), or by using tcpdump or tshark. – tonioc Jan 07 '15 at 10:23
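
A minimal iptables accounting setup along those lines might look like this (the chain name is just an example; these rules only count traffic, they do not block anything):

    # Create a dedicated chain for counting and send all outbound traffic through it
    iptables -N OUT_ACCT
    iptables -A OUTPUT -j OUT_ACCT

    # Rules with no target simply count matching packets and fall through
    iptables -A OUT_ACCT -p tcp --dport 80
    iptables -A OUT_ACCT -p tcp --dport 443
    iptables -A OUT_ACCT    # catch-all so everything else is counted too

    # Later, read the per-rule packet and byte counters
    iptables -L OUT_ACCT -v -n -x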

1 Answer


Well, the high spike in outbound data happened again early this morning. I used tshark (thanks @tonioc) and could see that data was being sent to multiple IPs around the world, and more specifically to China. :-/ Anyway, I was creating some dumps from tshark and storing them in the /tmp folder when I realised that there was a file called fake.cfg sitting there. I immediately thought this was suspicious, so I did some research and found that my server had been compromised through vulnerabilities in the host-manager application that comes with the Apache Tomcat Bitnami instance I was running. Most likely the password was guessed and a malicious app was installed. There was also a "hosts-manager" app in my webapps folder which shouldn't have been there, and it contained an index.jsp file with a whole range of malicious scripts.
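
A rough sketch of the kind of capture and summary that surfaced this (the interface name and packet count are just examples; adjust them for your instance):

    # Capture a few thousand packets and write them to /tmp for later inspection
    tshark -i eth0 -c 5000 -w /tmp/outbound.pcap

    # Summarise the capture by IP endpoint to see where the bytes are going
    tshark -r /tmp/outbound.pcap -q -z endpoints,ip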

Anyway, I have cleared out all of those scripts and completely removed host-manager and any other Bitnami pages from my webapps folder, so now only my own webapp can be accessed. I have also ensured all default passwords have been changed and put monitoring in place on my instances for spikes in outbound data.
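
For anyone doing a similar clean-up, it boiled down to something like this (the paths assume a default Bitnami Tomcat layout, so check your own install location; "hosts-manager" is the rogue app I found):

    # Stop Tomcat before touching the webapps directory
    sudo /opt/bitnami/ctlscript.sh stop tomcat

    # Remove the stock manager/host-manager apps and the rogue "hosts-manager" app
    sudo rm -rf /opt/bitnami/apache-tomcat/webapps/manager \
                /opt/bitnami/apache-tomcat/webapps/host-manager \
                /opt/bitnami/apache-tomcat/webapps/hosts-manager

    # Remove the dropped file left in /tmp
    sudo rm -f /tmp/fake.cfg

    sudo /opt/bitnami/ctlscript.sh start tomcat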

Some articles on the issues:

http://www.coderanch.com/t/628222/Tomcat/fake-cfg-tmp-directory-lot
https://stackoverflow.com/questions/20017515/aws-network-traffic-high-due-to-folder-29881-and-fake-cfg
http://blog.rimuhosting.com/2013/08/09/old-tomcat-5-5-installs-being-exploited/

I think I am all good for now.

Cheers

Pelly