
Question: I have been working on increasing the speed of the website https://www.winni.in and I have come across a very strange phenomenon. If anyone could explain this, please do.

I was comparing the load times of Winni and Snapdeal, and found that Winni averages around 450 ms of connect time while Snapdeal averages just 30 ms, even though both websites are hosted in the Singapore region of AWS. I assumed this connect time is down to latency: if I access Winni from India or Australia it falls to around 150 ms or less, but Snapdeal (www.snapdeal.com) is consistently below 30 ms no matter which geographic location you access it from.
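For anyone wanting to reproduce the comparison outside Pingdom, here is a minimal sketch of timing the TCP handshake, which is essentially what Pingdom reports as connect time (port 443 and the 5-second timeout are assumptions):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Rough sketch: time the TCP handshake to each site, which is roughly what
// Pingdom reports as "connect time". Port 443 and the 5 s timeout are
// assumptions; running it from different regions shows the latency effect.
public class ConnectTime {
    public static void main(String[] args) throws Exception {
        for (String host : new String[]{"www.winni.in", "www.snapdeal.com"}) {
            long start = System.nanoTime();
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, 443), 5000);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(host + " TCP connect: " + elapsedMs + " ms");
        }
    }
}
```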

I am attaching screenshots taken from Pingdom for both Winni and Snapdeal, showing the connect time when tested from New York. Is there anything in AWS that we are missing which could reduce the connect time, or is it down to some server configuration issue?

[Screenshots: Winni load time and Snapdeal load time]

Winni's Server Stack Is:

  1. EC2 - Singapore region, SSD storage, 2-core CPU
  2. Nginx
  3. Tomcat
Abhinav

1 Answer


Latency can play a significant part in website performance, but it's not the whole story; page generation time is another significant factor. HTTPS needs a few extra round trips to set up the connection, so high latency makes that setup noticeably slower. HTTP/2 gets around this by setting up the connection once per server and then transferring files in parallel over it.
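As a rough illustration (not from the original answer), you can ask for HTTP/2 and see which protocol the server actually negotiates; on Java 11+ the built-in HttpClient does this. The URL is just an example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: prefer HTTP/2 and print which protocol version was negotiated.
// A server (or a CDN in front of it) that speaks HTTP/2 avoids repeating
// the connection setup for every parallel download.
public class Http2Check {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)   // falls back to HTTP/1.1 if not offered
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://www.snapdeal.com/"))
                .GET()
                .build();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Negotiated protocol: " + response.version());
        System.out.println("Status: " + response.statusCode());
    }
}
```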

Another solution, or workaround, is to use a content delivery network (CDN), which is what Snapdeal does - they use Akamai, one of the oldest and best-known CDNs. It's probably also one of the most expensive, I guess. There are dozens of CDNs available - MaxCDN, CloudFront, etc.

You can use CloudFlare, which has a free tier and works well with AWS. Typically pages still get retrieved from your origin server, so latency for the first page won't drop, but latency for static resources will. You can have CloudFlare cache whole pages, but if users can log in it's not practical - you don't want a non-logged-in user to see a page cached from a logged-in user's private session. Another advantage of using CloudFlare is that it does HTTP/2 for you, automatically, if you let it.
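One way to make whole-page caching safe is for the origin to tell the CDN what it may cache. Since the stack above runs Tomcat, here is a minimal sketch of a servlet filter along those lines; the logged-in check, the method check and the max-age value are assumptions, not something from the answer:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: let a CDN cache anonymous GET responses, but never pages tied to a
// logged-in session. The session check and the max-age value are assumptions.
public class CacheControlFilter implements Filter {
    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        boolean loggedIn = request.getSession(false) != null;
        if (loggedIn || !"GET".equals(request.getMethod())) {
            // Personalised or state-changing: the CDN must not cache this response.
            response.setHeader("Cache-Control", "private, no-cache");
        } else {
            // Anonymous GET: safe for the CDN edge to cache for a short time.
            response.setHeader("Cache-Control", "public, max-age=300");
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {}
}
```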

To hit 30 ms regardless of where you access from, they must be caching their pages on Akamai. Akamai is probably a little smarter than CloudFlare's free tier about page caching, for example not caching a page if a particular cookie is being sent or if the request is a POST. Alternatively, they could have application servers at many of Akamai's edge locations.
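A quick way to see whether a given page is actually being served from a cache is to look at the response headers. Header names vary by CDN (CF-Cache-Status is CloudFlare-specific, Age is a standard header many caches add, and Akamai uses its own debug headers), so treat this only as a rough probe:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: print cache-related response headers for a page. "Age" is a standard
// header added by HTTP caches; "CF-Cache-Status" is CloudFlare-specific; other
// CDNs use their own headers, so absence here doesn't prove anything.
public class CacheProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://www.snapdeal.com/")).build();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        for (String name : new String[]{"Age", "Cache-Control", "CF-Cache-Status", "X-Cache"}) {
            response.headers().firstValue(name)
                    .ifPresent(value -> System.out.println(name + ": " + value));
        }
    }
}
```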

I've written a bit of a tutorial on AWS/Nginx performance here. I'll publish the final part, on the CloudFlare CDN, when I have a bit more time, but it's not that difficult to set up.

Tim
  • Thanks for such insights. I went ahead with CloudFront's full-site delivery and started caching a few important pages; even for the pages that are not cached, latency dropped by half by routing through CloudFront's edge locations. – Abhinav May 12 '16 at 09:21