
I'm in a pickle. I had thought my server was in Canada, but it turns out it is in France. I don't have time for a server migration right now (that is the end goal, as 75% of the visitors are from the USA).

Using the website speed test on Pingdom, the time to first byte is around 140 ms when I test from Sweden. When I test from California it's around 680 ms. I need to reduce this, as Google says the site is slow to crawl, and since most visitors are from the US I need to improve their experience.

Cloudflare is caching static assets, so while the latency is also in the 600 ms+ region for CSS, JS and images, it drops right down once Cloudflare picks the files up. My focus is now on the PHP files.

I am unable to use nginx, as I cannot do a 100% HTML cache via a proxy. I can, however, use Redis to cache 99% of the HTML. I have upgraded to PHP 7, and Apache now works with it via FastCGI. I have made the site as quick as I possibly can with heavy memory caching, removing all unused modules, etc. The site is quite quick: Pingdom tests from California take 2-3 seconds to fully render a page that does not hit the HTML cache. But if I could get rid of some of that 600 ms delay I would be breezing.

Outside of owning servers in both the EU and the USA and using a load balancer, what can I do to speed up the experience for US users and get rid of some of that latency?

I am running CentOS 7 with Apache 2.4 and PHP 7 (FastCGI) over a 100 Mbit connection on a dedicated server in France, with 16 GB RAM and a 4-core/8-thread Xeon @ 2.4 GHz.

Dan Hastings
  • Questions should demonstrate reasonable business information technology management practices. Questions that relate to unsupported hardware or software platforms or unmaintained environments may not be suitable for Server Fault - see the help center. – TomTom Oct 12 '17 at 07:21
  • So, you were not smart enough to set your system up properly, you are not smart enough to fix it, and instead you now ask us to bend the rules of physics and possibly fix your internet provider's routing? You do win a prize. – TomTom Oct 12 '17 at 07:23
  • The only way to get rid of network latency is by getting rid of it - moving your server. This is why EVERY larger setup in the world that cares about latency has servers in multiple locations. And uses geo routing, which you won't easily get at your scale outside of moving to a cloud provider. – TomTom Oct 12 '17 at 07:24

1 Answer


The response times you mention sound excessive. I would first verify how quickly the responses can be produced when networking is removed from the equation.

Try running a command on the server itself to download the page and see how long it takes. If you want to time a single resource, you can do that easily with wget or curl. If you want something that identifies all the resources needed to render a page and downloads them, you will need to look at other tools.
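For example, curl's write-out variables can break the timing down per request (the URL below is a placeholder; run it on the server itself so only local processing is measured, not the transatlantic path):

    # Time a single page fetch locally; starttransfer is effectively the
    # time to first byte as produced by Apache/PHP alone.
    curl -o /dev/null -s -w \
      "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
      https://www.example.com/uncached-page/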

To measure what kind of latency you have, try sending a single ICMP echo request from locations around the world to your server, to see how much time the network alone adds. There are plenty of sites that can run such a test for you.
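A plain ping from a remote machine (the hostname below is a placeholder) gives you that baseline; the rtt figures are pure network round-trip, with Apache and PHP out of the picture:

    # A few ICMP echo requests against the server
    ping -c 4 your-server.example.com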

Even if the content cannot be cached, there is still a potential benefit from putting a proxy in front of the server. The proxy can be located close to the users, and it can establish an HTTPS connection to the backend server ahead of time, so that users only have to wait for a single round trip on the transatlantic link rather than the two or more needed if the connection had to be set up when the user starts loading the page.
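As a rough sketch of that idea (hostnames and certificate paths are placeholders, and the same could be done with Apache's mod_proxy instead), an nginx reverse proxy on a small US-located machine might look like this:

    # Terminate TLS close to the user and keep warm connections to the
    # origin in France, so most requests cross the Atlantic only once.
    upstream origin {
        server origin.example.com:443;   # placeholder backend hostname
        keepalive 16;                    # pool of reusable connections
    }

    server {
        listen 443 ssl;
        server_name www.example.com;                    # placeholder
        ssl_certificate     /etc/pki/tls/fullchain.pem; # placeholder paths
        ssl_certificate_key /etc/pki/tls/privkey.pem;

        location / {
            proxy_pass https://origin;
            proxy_http_version 1.1;
            proxy_set_header Connection "";  # enable keepalive to the backend
            proxy_set_header Host $host;
            proxy_ssl_server_name on;        # send SNI to the origin
        }
    }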

Moreover, when you control the server at each end of the connection, you will be in a better position to measure round-trip time, jitter, and packet loss on the transatlantic link. Being able to measure it is the first step towards being able to improve it.
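For that measurement, something along these lines between the two machines you control would do (the hostname is a placeholder):

    # mtr reports per-hop loss and latency; ping's mdev value is a
    # rough stand-in for jitter over the sampled interval.
    mtr --report --report-cycles 100 origin.example.com
    ping -c 100 -i 0.2 origin.example.com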

Finally, find out how many round trips the structure of your page adds to downloading all of its resources. The proxy can get you down to one round trip per resource, but the structure of the page decides how many resources can be downloaded in parallel. For example, if the page references a JavaScript file and that script then loads an image when it runs, you need at least three round trips, because the browser can't start downloading the next resource until it has the previous one.
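As a hedged illustration (the filenames are placeholders), preload hints flatten such a chain by letting the browser request the script-loaded image as soon as it has parsed the HTML, instead of waiting for the script to run:

    <!-- Without the hints: HTML -> app.js -> hero.jpg, three dependent round trips. -->
    <link rel="preload" as="script" href="/js/app.js">
    <link rel="preload" as="image"  href="/img/hero.jpg">
    <script src="/js/app.js" defer></script>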

kasperd
  • So I tried using ApacheBench and the average request took 208 ms. I set it to target an HTML file containing just the word "test" (a tiny file). I have another domain on this server, so I ran the same test against that and it averaged 83 ms. It looks like the vhost is the issue. Interestingly, when I try to use ApacheBench against the https URL it gives me "SSL handshake failed". Cloudflare might be doing something here. – Dan Hastings Oct 12 '17 at 12:07