
I have an EdgeRouter X SFP as the main router to the internet. Connected to this router is a server running a reverse-proxy Docker container called SWAG (formerly letsencrypt), which I use to access Nextcloud and several nginx containers from the internet.

This works almost perfectly: browsing those sites and Nextcloud works, but the problems start when I download large files.

When I download a 20 GB file from Nextcloud via the web browser, it fails with a "network failure" error in Chrome. I can resume the download (from the context menu in the download list) and finish it after several restarts. The same happens when I download the same file from an nginx site, so it is not directly Nextcloud-related.

However, I can successfully download the file when I connect to my network through OpenVPN and fetch it via the internal LAN IP. So using the server's direct IP over OpenVPN does not produce a "network failure" message in Chrome.

So can someone help me figure out where the problem lies?

  1. Is the EdgeRouter X SFP configured wrong? (I only added the port forwarding.)
  2. Is the reverse proxy the problem? (I used the proxy confs suggested by the Docker image itself.)
  3. Or is it something else?

EDIT 1: nginx conf file:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name shop.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Username and Password Required";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_grafana nginx;
        proxy_pass http://$upstream_grafana;
    }
}

Nextcloud conf file:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name nextcloud.*;
    
    include /config/nginx/ssl.conf;
    
    add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";
            

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        proxy_max_temp_file_size 2048m;
        
        proxy_pass https://$upstream_nextcloud;         
    }
    
    
    location ^~ /.well-known {
        # The following 6 rules are borrowed from `.htaccess`

        location = /.well-known/carddav     { return 301 /remote.php/dav/; }
        location = /.well-known/caldav      { return 301 /remote.php/dav/; }
        # Anything else is dynamically handled by Nextcloud
        location ^~ /.well-known            { return 301 /index.php$uri; }

        try_files $uri $uri/ =404;
    }       
}
Andreas

1 Answer


OK, I figured it out myself.

Both Dockers are based on an nginx server, and there is a directive called proxy_max_temp_file_size that limits how much of the upstream response nginx buffers to a temporary file while proxying. I set it to a very low value (100m), so the buffering-related timeout is never triggered and no "network failure" occurs. You could also disable temp-file buffering entirely with proxy_max_temp_file_size 0, but I'm not sure which is better, so I'm currently leaving it at 100m.
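For reference, this is how the change looks in the Nextcloud location block from the conf above; the only difference is the proxy_max_temp_file_size value (2048m lowered to 100m):

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;

        # Limit how much of the upstream response nginx spools to a
        # temp file on disk; 0 would disable temp-file buffering entirely.
        proxy_max_temp_file_size 100m;

        proxy_pass https://$upstream_nextcloud;
    }

With a small (or zero) temp-file limit, nginx stops reading from the upstream once its buffers are full, so the transfer is paced by the client instead of being spooled to disk, which is what avoided the stalls here.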

Andreas