
I've got an nginx 1.10.3 server running on Debian Stretch. One of the sites it serves is a WebDAV share that is read and written by a desktop app. The app performs the following steps when a file named myfile is saved to the WebDAV server:

  1. DELETE /myfile.tmp
  2. PUT /myfile.tmp, body contains new file data
  3. DELETE /myfile
  4. MOVE /myfile.tmp, Destination: http://webdav.example.com/myfile
  5. GET /myfile

The client app compares the response from step 5 to the data sent in step 2, and if the file data does not match, an error is raised. These steps happen extremely rapidly on our particular network (the server and client are geographically close, connected to the same Ethernet switch); my testing with tcpdump suggests the entire conversation finishes within 45 ms.
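
For anyone who wants to reproduce the sequence against a test share, here is a rough equivalent of what the client does (a minimal sketch using Python's requests library and the webdav.example.com host from the config below; the real app is a desktop program, not this script):

import requests

BASE = "http://webdav.example.com"   # host from the site config below
data = b"new file contents"          # stand-in for the real file data

s = requests.Session()
s.delete(BASE + "/myfile.tmp")                          # step 1
s.put(BASE + "/myfile.tmp", data=data)                  # step 2
s.delete(BASE + "/myfile")                              # step 3
s.request("MOVE", BASE + "/myfile.tmp",
          headers={"Destination": BASE + "/myfile"})    # step 4
resp = s.get(BASE + "/myfile")                          # step 5
assert resp.content == data, "stale data returned"      # the check that fails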

The problem is, the data returned in step 5 doesn't immediately match what the client sent in step 2. The data being returned is the previous version of myfile, before the DELETE/MOVE steps replaced it. If I were to go back and repeat step 5 manually a moment later, the file data would be the new version as expected.

I know the client waits for each response to arrive before issuing a subsequent request. My best guess is that different requests are hitting different nginx workers/threads, or maybe there is some kind of cache invalidation or flush that isn't happening fast enough.

How can I fix this behavior without modifying the client app or artificially slowing down the requests?

The full nginx.conf and site config follow:

pid /run/nginx.pid;
user www-data;
worker_processes auto;
worker_rlimit_nofile 20000;

events {
    multi_accept on;
    use epoll;
    worker_connections 4000;
}

http {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    sendfile on;
    server_tokens off;
    tcp_nodelay on;
    tcp_nopush on;
    keepalive_requests 100000;
    keepalive_timeout 65;
    client_body_timeout 10;
    send_timeout 10;
    reset_timedout_connection on;
    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_errors on;
    open_file_cache_min_uses 2;
    open_file_cache_valid 30s;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    gzip on;
    gzip_buffers 16 8k;
    gzip_comp_level 6;
    gzip_disable msie6;
    gzip_http_version 1.1;
    gzip_min_length 10240;
    gzip_proxied any;
    gzip_vary on;
    gzip_types
        application/atom+xml
        application/javascript
        application/json
        application/ld+json
        application/manifest+json
        application/rss+xml
        application/vnd.geo+json
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-javascript
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml
        application/xml+rss
        font/opentype
        image/bmp
        image/svg+xml
        image/x-icon
        text/cache-manifest
        text/css
        text/javascript
        text/plain
        text/vcard
        text/vnd.rim.location.xloc
        text/vtt
        text/x-component
        text/x-cross-domain-policy
        text/xml;

    server {
        listen 80;
        listen [::]:80;
        server_name webdav.example.com;

        root /var/www/webdav.example.com;

        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
        dav_access user:rw group:r all:r;
        dav_methods DELETE MOVE PUT;
        create_full_put_path on;
    }
}

EDIT: An interesting observation: if I reload nginx (sudo service nginx reload), the very first attempt to save the file succeeds, but any subsequent save fails with the same error.

1 Answer

Turns out it was the open_file_cache stuff. The docs make it sound like it only caches file metadata a la a stat cache, but it also caches open file descriptors, so a worker can keep serving the previous contents of myfile for up to open_file_cache_valid seconds after the MOVE replaced it. That also explains why the first save after a reload worked: the cache starts out empty.
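
To make the failure mode concrete: once a worker has an open descriptor cached for /myfile, replacing the file on disk does not change what that descriptor points to. The same thing can be demonstrated with a few lines of Python (purely illustrative, nothing nginx-specific; the filenames are made up):

import os

# Create the "old" version and hold an open descriptor to it,
# the way open_file_cache holds one inside an nginx worker.
with open("myfile", "w") as f:
    f.write("old contents")
fd = os.open("myfile", os.O_RDONLY)

# Replace the file the way the WebDAV client does: write a temp file,
# then move it over the original.
with open("myfile.tmp", "w") as f:
    f.write("new contents")
os.replace("myfile.tmp", "myfile")

# The held descriptor still refers to the old inode, so it reads stale data,
# while a fresh open sees the new contents.
print(os.pread(fd, 100, 0))   # b'old contents'
print(open("myfile").read())  # new contents
os.close(fd)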

Adding open_file_cache off; inside the server { ... } block was all it took, and now it's working well.
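
For reference, the change amounts to adding one directive to the server block from the question (everything else stays as it was):

server {
    # ... listen, server_name, root, autoindex and dav settings as before ...

    # Override the http-level open_file_cache settings for this vhost so a GET
    # issued immediately after a PUT/MOVE sees the new contents.
    open_file_cache off;
}

Since the cache is still configured at the http level, it stays enabled for any other sites served by the same instance. If the caching mattered for this vhost too, another option might be a much shorter open_file_cache_valid, though that only narrows the window; turning it off here is the simplest fix.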

  • Man, you saved me hours. When I migrated from Ubuntu 14 to Ubuntu 18 I had a problem with KeePass: the file was stored fine, but KeePass complained that the file contents had changed and might be corrupted. With your answer implemented, the problem is gone. Thanks! – Pawel Cioch Aug 21 '20 at 20:00