1

I am trying to optimize the loading time of a webpage. Currently the main HTTP request completes in under a second, but the subsequent loading of images seems to "cascade":

http://picpaste.novarata.net/pics/320bd387b4988729ad2dbb10f69a7857.png

I am using nginx to serve my static files, with 3 workers. Is there a possibility that connections are being held up by the backlog?

I would assume that all of these images would be loaded at the same time. Files are being served over HTTPS, with keepalive set to 65 seconds.
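For reference, the relevant parts of my nginx configuration look roughly like this (hostname and certificate paths below are placeholders, not the real values):

```
# Sketch of the setup described above: 3 workers, HTTPS, 65 s keepalive.
worker_processes 3;

http {
    keepalive_timeout 65;

    server {
        listen 443 ssl;
        server_name example.com;                          # placeholder

        ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder
        ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder

        # Static files (the images shown in the screenshot) are served directly.
        location /static/ {
            root /var/www/site;
        }
    }
}
```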

EDIT: Firefox does indeed seem to try to load them all at once, but the connection time gradually increases to more than 2 seconds:

http://picpaste.novarata.net/pics/32c28414501d165df3e04b01e3ac3084.png

Enuy

3 Answers

1

This behaviour depends on your browser. If you use Chrome, for example, you may find that the Timing graph shows the same behaviour, but that requests are 'stalled' for a while; this is likely due to Chrome's limit of six concurrent connections per origin.

You don't say which browser and version you are using.

Here's an extract from https://developer.chrome.com/devtools/docs/network#resource-network-timing

Stalled/Blocking: Time the request spent waiting before it could be sent. This time is inclusive of any time spent in proxy negotiation. Additionally, this time will include when the browser is waiting for an already established connection to become available for re-use, obeying Chrome's maximum six TCP connection per origin rule.
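If you do want more parallelism under that rule, the usual workaround is to spread the static assets across more than one hostname, so the browser opens a separate pool of six connections for each. A minimal sketch, assuming a hypothetical static1.example.com that resolves to the same nginx box:

```
# Hypothetical second hostname ("shard") for static assets.
# static1.example.com must also resolve to this server in DNS.
server {
    listen 443 ssl;
    server_name static1.example.com;

    ssl_certificate     /etc/nginx/ssl/static1.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/static1.key;   # placeholder

    location / {
        root /var/www/site/static;
    }
}
```

Some of the image URLs would then reference static1.example.com instead of the main hostname. Note this has its own costs (extra DNS lookups and TLS handshakes), so it is a trade-off rather than a free win.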

Cameron Kerr
  • It seems that you are right - Firefox and Chrome are both configured to use at most 6 parallel connections to one server. Is there any way around this, other than serving static/media files from a different server? – Enuy Apr 11 '15 at 09:29
1

It is common to use a dedicated content server to serve static content. This lets you set appropriate caching headers so that the browser can cache the content, which reduces rendering time on subsequent page loads. Some sites set very long expiry times and use new file names for updated content.
While this does not help with the initial load time, it should improve subsequent load times. It also reduces load on your server, which will help on that end as well.
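As an illustration only (the path and expiry value here are assumptions, not a recommendation for this particular site), the long-expiry approach looks roughly like this in nginx:

```
# Illustrative: far-future caching for fingerprinted static assets.
# Assumes file names change whenever content changes (e.g. app.3f2a1c.css).
location /static/ {
    root /var/www/site;
    expires 365d;
    add_header Cache-Control "public";
}
```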

While there is relatively little you can do to reliably reduce load time, it is possible to improve rendering times. (There are good reasons that browsers limit the number of connections they open.) Including sizing data for graphical elements allows the page to render before the graphic has been downloaded. Asynchronously loading script components that are not required for rendering can also help.

BillThor
0

This can depend on your KeepAlive settings. If enabled, the server will attempt to use as few connections as possible (in concert with the browser). However, while doing so, communication becomes "synchronous": the browser asks for A and won't ask for B until A has arrived. The server also has to send items in the order the requests were received, not in the order it was able to process them. This is a limitation of HTTP/1.1.

What you're asking for is HTTP/2 behaviour, where operations are asynchronous and there is real multiplexing going on.
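As a sketch (the nginx version in use isn't stated, so this is an assumption): enabling SPDY or HTTP/2 in nginx is a change to the listen directive, provided the server was built with the corresponding module:

```
server {
    # nginx 1.9.5+ built with the http2 module:
    listen 443 ssl http2;
    # older builds with the SPDY module would use instead:
    # listen 443 ssl spdy;

    server_name example.com;                          # placeholder
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder
}
```

With multiplexing in place, all of the image requests can share a single connection instead of queuing behind the six-connection limit.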

Droopy4096
  • KeepAlive is enabled, connections should persist for 65 seconds. Also I should probably mention that files are being served over https. – Enuy Apr 11 '15 at 09:27
  • Upon rereading - are you implying that SPDY should solve the problem? – Enuy Apr 11 '15 at 09:34
  • @Enuy in a way - yes. SPDY will help, but SPDY is not exactly HTTP/2 – Droopy4096 Apr 11 '15 at 16:34
  • if serving over https, you really need to ensure you let the browser cache as much as it can, ideally without having to validate for freshness (ie. using `Cache-control: public, max-age=XXX`). I have a couple of similar sites (one is a bit like youtube, but most content is embedded on another intranet site and therefore needs to be served over https to avoid mixed-content problems). – Cameron Kerr Apr 11 '15 at 23:32