8

I have an upstream server handling the login of our website. On a successful login I want to redirect the user to the secure part of the site. On a failure to login I want to redirect the user to the login form.

The upstream server returns a 200 OK on a successful login and a 401 Unauthorized on a failed login.

This is the relevant part of my configuration:

{
    error_page 401 = @error401;
    location @error401 {
        return 302 /login.html; # this page holds the login form
    }

    location = /login { # this is the POST target of the login form
        proxy_pass http://localhost:8080;
        proxy_intercept_errors on;
        return 302 /secure/; # without this line, failures work; with it, failed logins (401 upstream response) still get redirected with a 302
    }
}

This setup works when the login succeeds: the client is redirected with a 302. It does not work when the login fails. The upstream server returns a 401 and I expected the error_page to kick in, but I still get the 302. If I remove the `return 302 /secure/;` line, the redirect to the login page works. So it seems I can have either one, but not both.

Bonus question: I doubt that the way I handle the error_page with that named location is The Way. Am I correct in doing it like this?

edit: It turns out that having a `return` in the location block makes Nginx skip `proxy_pass` entirely, so it makes sense that the error page is never hit. The problem of how to do this, however, remains.
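To illustrate: with a `return` present, the location is served entirely by the rewrite module, so `proxy_pass` never runs and `error_page` has nothing to intercept. A minimal sketch of the failure half alone (same backend as above), which does work once the `return` is dropped:

```nginx
location = /login {
    proxy_pass http://localhost:8080;
    proxy_intercept_errors on;   # let error_page act on upstream error responses
    error_page 401 = @error401;  # rewrite only the 401 into a redirect
    # no "return" here, otherwise proxy_pass is never reached
}

location @error401 {
    return 302 /login.html;
}
```

This covers only the 401 case; redirecting the 200 response still needs a different mechanism.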

harm

5 Answers

5

The exact solution to the question is to use the Lua capabilities of Nginx.

On Ubuntu 16.04 you can install a version of Nginx supporting Lua with:

$ apt install nginx-extras

On other systems it might be different. You can also opt for installing OpenResty.

With Lua you have full access to the upstream response. It looks as though the upstream status is available via the `$upstream_status` variable, and in a way it is, but because of how `if` directives are evaluated in Nginx you cannot use `$upstream_status` in an `if` condition.

With Lua your configuration will then look like:

location = /login { # the POST target of your login form
    rewrite_by_lua_block {
        ngx.req.read_body()
        local res = ngx.location.capture("/login_proxy", {method = ngx.HTTP_POST})
        if res.status == 200 then
            -- pass along the cookie set by the backend
            ngx.header["Set-Cookie"] = res.header["Set-Cookie"]
            return ngx.redirect("/shows/")
        else
            return ngx.redirect("/login.html")
        end
    }
}

location = /login_proxy {
    internal;
    proxy_pass http://localhost:8080/login;
}

Pretty straightforward. The only two quirks are reading the request body so the POST parameters are passed along, and setting the cookie on the final response to the client.


What I actually ended up doing, after a lot of prodding from the community, is handling the upstream responses on the client side. This leaves the upstream server unchanged and my Nginx configuration simple:

location = /login {
       proxy_pass http://localhost:8080;
}

The client initiating the request handles the upstream response:

  <body>
    <form id='login-form' action="/login" method="post">
      <input type="text" name="username">
      <input type="password" name="password">
      <input type="submit">
    </form>
    <script type='text/javascript'>
      const form = document.getElementById('login-form');
      form.addEventListener('submit', (event) => {
        const data = new FormData(form);
        const postRepresentation = new URLSearchParams(); // My upstream auth server can't handle the "multipart/form-data" FormData generates.
        postRepresentation.set('username', data.get('username'));
        postRepresentation.set('password', data.get('password'));

        event.preventDefault();

        fetch('/login', {
          method: 'POST',
          body: postRepresentation,
        })
          .then((response) => {
            if (response.status === 200) {
              console.log('200');
            } else if (response.status === 401) {
              console.log('401');
            } else {
              console.log('we got an unexpected return');
              console.log(response);
            }
          });
      });
    </script>
  </body>

The solution above achieves my goal of having a clear separation of concerns. The authentication server is oblivious to the use cases the callers want to support.
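The `console.log` placeholders above are where the client-side redirects would go. A small sketch of the status-to-destination mapping, assuming the `/secure/` and `/login.html` paths from the question (the function name is made up for illustration):

```javascript
// Map an HTTP status from the auth backend to the page the browser
// should navigate to next. Returns null for unexpected statuses so
// the caller can decide how to handle them.
function redirectTargetFor(status) {
  if (status === 200) return '/secure/';    // login succeeded
  if (status === 401) return '/login.html'; // login failed, back to the form
  return null;
}

// Inside the fetch handler this would become something like:
//   const target = redirectTargetFor(response.status);
//   if (target) window.location.assign(target);
```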

harm
1

While I completely agree with @michael-hampton that this issue should not be handled by nginx, have you tried moving `error_page` into the location block?

{
    location @error401 {
        return 302 /login.html; # this page holds the login form
    }

    location = /login { # this is the POST target of the login form
        proxy_pass http://localhost:8080;
        proxy_intercept_errors on;
        error_page 401 = @error401;
        return 302 /secure/; # without this line, failures work; with it, failed logins (401 upstream response) still get redirected with a 302
    }
}
2ps
0

You need to set `proxy_intercept_errors on` at the server level in order to handle proxy errors.

server {
    server_name mydomain.com;
    proxy_intercept_errors on;

    location @error401 {
        return 302 /login.html; # this page holds the login form
    }
    location / {
        proxy_pass http://localhost:8080;
        error_page 401 = @error401;
    }
}
ibrahim
0

I'm not entirely sure whether the following works, and I share Michael's view, but you could try the HTTP auth request module, maybe something along these lines:

location /private/ {
    auth_request /auth;
    ... 
}

location = /auth {
    proxy_pass ...
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # This is only needed if your auth server uses this information,
    # for example if you're hosting different content which is 
    # only accessible to specific groups, not all of them, so your
    # auth server checks "who" and "what"
    proxy_set_header X-Original-URI $request_uri;
    error_page 401 = @error401;
}

location @error401 {
    return 302 /login.html;
}

You would use this config snippet not on the server doing the actual auth, but on the server hosting the protected content. I couldn't test this right now, but maybe it is still of some help to you.

Regarding your bonus question: Yes, AFAIK, this is the way this should be handled.

gxx
  • This is more or less what I arrived at. But the issue is that I want to redirect a 200 OK response from the upstream server. `error_page` won't let me do that. And adding `return` in the `location` block prevents the other `error_page` directive from working. – harm Dec 23 '16 at 07:38
  • @harm Why do you want to redirect if getting a `200 OK`? Why isn't it possible to let the user access the content if getting a `200 OK`, and just redirect if getting a `401`? – gxx Dec 23 '16 at 12:05
  • I'm doing that *if* a user accesses `/secure`, with that `auth_request` and that works absolutely great. But I'm looking for a solution when a user *logs in*. The user POSTs a form to `/login`. Either he supplied a correct username/password combo or he didn't. If he did redirect to `/secure` (which will make an `auth_request`), if he didn't redirect back to the login form. – harm Dec 23 '16 at 12:32
  • I'm afraid I don't get it, but maybe [this](https://developers.shopware.com/blog/2015/03/02/sso-with-nginx-authrequest-module/) is of any help. My proposal is better described over there. – gxx Dec 23 '16 at 12:53
  • Hahaha! That is *exactly* the template I used when setting this up. That article was/is tremendously helpful! Many thanks. You've used `login.example.com` to supply the auth cookie to the user and you have probably all sorts of behavior on that domain (deal with registration and login flow). I want to encode that in Nginx so I don't have to write a separate app for that. But it seems I'm going to have to. – harm Dec 23 '16 at 13:12
  • On a side note, you can deal with this situation on frontend, e.g. do the auth in AJAX request and given its result make a client-side redirect. – Peter Zhabin Dec 23 '16 at 13:21
  • @harm Not sure if you're joking, sarcastic or serious. Anyway, good luck. – gxx Dec 23 '16 at 13:53
  • Ow, sorry. I get that a lot. I was absolutely serious. Your article actually gave me the idea. – harm Dec 23 '16 at 14:40
  • @harm Alright. I'm not sure if it has to be a separate app. I mean, you've already "something" in place which is able to do the auth, right? If so, just don't use `proxy_pass ...`, but your location which responds either with `200` or `401`. Doesn't this work? – gxx Dec 23 '16 at 14:57
  • My front end is a dumb HTML form. But that piece is very much under my control... So I think I could use some javascript/ajax to handle the 200/401 responses. This puts me on the right track! – harm Dec 23 '16 at 15:19
  • @harm Well, still it emits `200` or `401`, right? To which service does the "dump frontend" speak? To some auth service, I guess? If not: Which service is actually checking credentials, etc.? – gxx Dec 23 '16 at 15:21
  • The 'dump frontend' talks to an auth service. That auth service supplies three things: login (exchange username + password for token), registration and authentication (supplied token is checked). The authentication part works great. The login/registration yield 200s and 401s which are pretty useless to my frontend. – harm Dec 23 '16 at 15:31
  • But I can make my frontend smarter with some javascript. (annoyingly I can't edit my comments anymore because of the bounty I guess) – harm Dec 23 '16 at 15:32
0

The behavior you see is expected, as `return` replaces whatever your rewrites have generated.

While I completely agree with Michael Hampton, if you're really out of other options you could try something along the following lines. Please bear in mind that this is a dirty hack, and you really need to reconsider your architecture in the first place:

upstream backend {
    server localhost:8080; # note: no scheme here; the "server" directive takes a host:port
}

server {
     location / {
         proxy_pass http://backend;
         if ($upstream_status = 401) {
             return 302 /login.html;
         }
         if ($upstream_status = 200) {
             return 302 /secure/;
         }
     }
}
Peter Zhabin
  • `$upstream_status` seems to be ignored entirely. With this configuration the response is just the upstream response. – harm Dec 23 '16 at 10:57
  • The fact that variable is ignored I've noticed before BTW. It's rather odd. When I add it as a header (I have no idea how to debug this in any other sane way) like so: `add_header X-Debug $upstream_status always;` I *do* see the correct status outputted. Perhaps the test in the if statement should be different? – harm Dec 23 '16 at 11:01
  • And enough people said this was a bad idea. I can't change the upstream server. Should I then put something in between the current upstream and nginx? Something which translates `200` to `302 /secure/` and `401` to `302 /login.html`? – harm Dec 23 '16 at 11:06
  • I believe this happens because if's actually get evaluated before the request is made, don't have non-production nginx to test that at the moment. But you can then try Lua module for nginx, with that you can do whatever you want with responses.. – Peter Zhabin Dec 23 '16 at 11:54
  • That basically means going for OpenResty right? Compiling Lua into Nginx seems against the grain: https://github.com/openresty/lua-nginx-module#installation – harm Dec 23 '16 at 11:58
  • Yes, building this from scratch is somewhat complicated :) – Peter Zhabin Dec 23 '16 at 12:21
  • @harm Not sure which OS you're using, but in case it's Debian / Ubuntu: The lua module is included in `nginx-extras`. – gxx Dec 23 '16 at 13:55
  • I'm free to choose the OS at this point. And it just became Debian / Ubuntu. :) – harm Dec 23 '16 at 14:39