
I was considering moving my static files to multiple origin servers, but I noticed that NGINX computes different ETags for identically deployed files on the different servers. This confuses downstream caching layers into thinking that files which have not changed keep changing.

What I wanted was for the ETag to depend on the content of the served file itself, e.g. an MD5 or other hash of the content (with NGINX naturally caching the computed hash locally for performance).

Is this possible with the built-in static file serving, or am I expected to solve this another way? E.g. write my own "file server" app/script that computes and caches the hash, or somehow ensure that all the filesystem metadata (whatever NGINX uses) is always identical?
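A custom "file server" helper along those lines could look like the following sketch. Everything here is hypothetical (the function name, the cache layout); it just illustrates the idea of a content-derived ETag that is recomputed only when the file's metadata changes:

```python
import hashlib
import os

# Hypothetical in-memory cache: (path, mtime_ns, size) -> ETag string.
# If the file is redeployed with new content, the key changes and the
# hash is recomputed; otherwise the cached value is reused.
_etag_cache: dict = {}

def content_etag(path: str) -> str:
    """Return a content-based ETag (quoted MD5 hex digest) for a file."""
    st = os.stat(path)
    key = (path, st.st_mtime_ns, st.st_size)
    if key not in _etag_cache:
        with open(path, "rb") as f:
            _etag_cache[key] = '"%s"' % hashlib.md5(f.read()).hexdigest()
    return _etag_cache[key]
```

Because the ETag is derived purely from the bytes served, every origin server returns the same value for the same file, regardless of deployment timestamps.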

Using Apache or similar instead is an option.

Fire Lancer

1 Answer


As stated in "Algorithm behind nginx etag generation", nginx uses the last-modified time and the content length of a file to generate its static file ETags.
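That scheme (hex-encoded mtime, a dash, hex-encoded length) can be reproduced with a short sketch; `nginx_style_etag` is a hypothetical helper name, not part of nginx:

```python
import os

def nginx_style_etag(path: str) -> str:
    """Approximate nginx's static-file ETag: "<hex mtime>-<hex size>"."""
    st = os.stat(path)
    return '"{:x}-{:x}"'.format(int(st.st_mtime), st.st_size)
```

Note that the file's content never enters the calculation, which is why two byte-identical files with different mtimes get different ETags.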

So if you get different etags for the same static files on different backend servers, chances are that your files' timestamps do not match exactly (e.g. because you checked them out with git at different times).

I don't see any nginx setting to configure how the ETag is computed, so your only option with nginx is to ensure that the file timestamps are exactly the same, e.g. by deploying with rsync or by setting the timestamps manually after checkout (for a script that sets each file's timestamp to the time it was last committed, see https://stackoverflow.com/questions/1964470/whats-the-equivalent-of-use-commit-times-for-git/13284229#13284229).
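As a sketch of the manual approach, a deploy step could pin every file under the document root to a single reference timestamp; the function name and the `REF_TIME` value below are placeholders (e.g. a release or commit time you choose):

```python
import os
import pathlib

REF_TIME = 1569319200  # placeholder reference time (2019-09-24T10:00:00Z)

def pin_mtimes(root: str, ref: int = REF_TIME) -> None:
    """Set every file under `root` to one fixed mtime so all origin
    servers produce identical mtime-based ETags for identical files."""
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file():
            os.utime(p, (ref, ref))
```

Running this as the last step of each server's deployment makes independently checked-out copies agree on timestamps, and therefore on nginx's ETags.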

Stefan
  • "chances are that your files' timestamps do not match exactly" less of a concern in a Docker deployment, where all the origin servers are likely to use identical Docker images and thus identical file timestamps. – Raedwald Sep 24 '19 at 10:28