
I set up Countly analytics on the free-tier AWS EC2, but stupidly did not set up an Elastic IP with it. Now the traffic is so great that I can't even log into the analytics dashboard, as the CPU is constantly running at 100%.

I am in the process of issuing app updates to change the analytics address to a private domain that forwards to the EC2 instance, so I can change the forwarding in the future.

In the meantime, is it possible for me to set up a second instance and forward all the traffic from the current one to the new one?

I found this: http://lastzactionhero.wordpress.com/2012/10/26/remote-port-forwarding-from-ec2/. Will this work from one EC2 instance to another?

Thanks

EDIT --- Countly log

/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:529
        throw err;
        ^
ReferenceError: liveApi is not defined
    at processUserSession (/home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:203:17)
    at /home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:32:13
    at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/collection.js:1010:5
    at Cursor.nextObject (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:653:5)
    at commandHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:635:14)
    at null. (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/db.js:1709:18)
    at g (events.js:175:14)
    at EventEmitter.emit (events.js:106:17)
    at Server.Base._callHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/base.js:130:25)
    at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:522:20

Darren

2 Answers


You can follow the steps described in the blog post to do the port forwarding. Just make sure not to forward it to localhost :)
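If you would rather forward traffic at the OS level instead of keeping an SSH tunnel open, here is a minimal sketch using iptables NAT on the old instance. This is an alternative to the blog post's SSH approach; NEW_INSTANCE_IP and port 80 are placeholders for your new instance's address and the port Countly listens on:

# enable IP forwarding on the old instance
sudo sysctl -w net.ipv4.ip_forward=1
# send incoming traffic on port 80 to the new instance
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination NEW_INSTANCE_IP:80
# rewrite the source address so replies route back through this instance
sudo iptables -t nat -A POSTROUTING -j MASQUERADE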

Also, about the 100% CPU: it is probably caused by MongoDB. Did you have a chance to check the process? If it is mongod, run the mongotop command to see the most time-consuming collection accesses. We can go from there.
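For example, mongotop ships with MongoDB; assuming mongod is running locally on the default port, a quick way to sample collection activity is:

mongotop 5   # print per-collection read/write time every 5 seconds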

osoner
  • Thanks. Looking at the 'top' command, node is using the most CPU, around 20%; mongodb uses around 10%. It's the AWS console monitoring that shows it at 100%. I think I have just reached an upper limit with the number of users, ~50,000 – Darren Nov 05 '13 at 11:14
  • Well, the top output doesn't seem to be weird at all then. 50K users is not much unless they are all daily active users. Did you check the logs in countly/log? – osoner Nov 06 '13 at 14:45
  • The countly-api log file is 412MB! Filled with the error I have posted above. – Darren Nov 06 '13 at 18:51
  • Note that the free-tier t1.micro instance type has significant variability in CPU and bandwidth allocation, and is aggressively curtailed on both when under load. It really is a 'sampler' type, and shouldn't be used as a 'production' platform for any task needing consistent resources. – Eight-Bit Guru Nov 06 '13 at 19:27
  • 1
    @Darren Please upgrade to Countly 13.10 which fixed this issue – osoner Nov 06 '13 at 19:37
  • Really. I must have missed that one. I normally catch all the updates. Sorry for all the noise, I'll update now. – Darren Nov 06 '13 at 19:59
  • While I have your attention, I don't suppose you can help with an .htaccess redirect for the API? Using this `RewriteRule ^i(\?.+)$ http://ec2-11-111-11-11.us-west-2.compute.amazonaws.com/i?$1 [R,L,QSA]` it registers the session but doesn't log app version, resolution, etc. – Darren Nov 06 '13 at 22:08

Yes, it is possible. I use nginx with a Node.js app and wanted to redirect traffic from one instance to another. The instance was in a different region and not configured in the same VPC, as mentioned in the AWS documentation.

  • Step 1: Go to /etc/nginx/sites-enabled and open the default.conf file. Your configuration might be in a different file.
  • Step 2: Change proxy_pass to your chosen IP/domain/sub-domain, as in the config below:
server
{
  listen 80;
  server_name your_domain.com;
  location / {
    ...
    # proxy_pass accepts an IP, domain, or sub-domain, with protocol (http/https)
    proxy_pass http://your_ip;
  }
}
  • Step 3: Then restart nginx:
sudo systemctl restart nginx

This works for external instances as well as instances in a different VPC.
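As a quick sanity check (a sketch, using the your_domain.com placeholder from the config above): run `sudo nginx -t` after editing to validate the configuration, and once nginx has restarted, confirm the proxy is in place with:

curl -I http://your_domain.com/   # response headers should now come from the target instance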