In terms of the intended purpose of the HTTP error codes, I would definitely go with 403 Forbidden, because the page does exist (404 is out), but the user is forbidden to access it for now (and this restriction is not due to a resource conflict like concurrent modification, but due to the user's account status, so 409 is out as well in my opinion). Another sensible option based on its intended purpose would have been 401, but as nalply already noted in his comment, this code triggers some, if not all, browsers to display a login dialog, as it implies that using the standard web-authentication mechanism can resolve the issue. So it is definitely not an option for you here.
Two things seem a little "misfitting" in the description of 403, so let me address them:
- Authorization will not help ...: This only talks about the authorization mechanism inside the HTTP protocol and is meant to distinguish 403 from 401. This statement does not apply to any form of custom authorization or session state management.
- ... the request SHOULD NOT be repeated ...: A request must always be seen in its session context, so if the session context of the user changes (he unlocks a feature) and he then retries accessing the same resource, that is a different request, i.e. there is no violation of this suggestion.
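To make that concrete, here is a minimal server-side sketch of the 403 approach, assuming a plain servlet; the FieldsServlet name, the featureUnlocked helper and the "featureUnlocked" session attribute are all hypothetical placeholders for your own session/feature bookkeeping:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FieldsServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Placeholder check -- substitute your own session/account bookkeeping here.
        if (!featureUnlocked(req)) {
            // The page exists, but this user may not access it (yet): 403.
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Feature not unlocked yet");
            return;
        }
        resp.setContentType("text/plain");
        resp.getWriter().write("the actual feature content");
    }

    // Hypothetical: reads an unlock flag from the session.
    private boolean featureUnlocked(HttpServletRequest req) {
        return Boolean.TRUE.equals(req.getSession().getAttribute("featureUnlocked"));
    }
}

The point is simply that the decision is made per request against the current session state, which is exactly what keeps it consistent with the "SHOULD NOT be repeated" wording discussed above.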
Of course, you could also define your own error code, but since it probably won't be reserved in any official way, there is no guarantee that some browser manufacturer isn't going to intentionally or accidentally use exactly that code to trigger a specific (debugging) action. It's unlikely, but not disallowed.
418 could be OK, too, though. :)
Of course, if you would like to specifically obscure the potential availability of features, you could also decide to use 404, as that is the only way not to give a nosy user any hints.
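(If you go that route, the 403 sketch above changes in exactly one line: send HttpServletResponse.SC_NOT_FOUND instead of SC_FORBIDDEN.)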
Now, to your caching issue:
Neither of these status codes (403, 404, 409, 418) should trigger the browser to cache the page against your will any more than another. The problem is that many browsers simply try to cache everything like crazy to be extra snappy. Opera is the worst here in my opinion. I've been pulling my hair out many times over these things. It SHOULD be possible to work it all out with the correct header settings, but I've had situations where either the browser or the server or some intermediate proxy decided to ignore them and break my page anyway.
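For completeness, here is a minimal sketch of the header settings I mean, again using the servlet API; whether every browser and proxy along the way actually honors them is, as said, another story:

import javax.servlet.http.HttpServletResponse;

public final class NoCacheHeaders {
    private NoCacheHeaders() {}

    // The usual belt-and-suspenders set of cache-suppressing response headers.
    public static void apply(HttpServletResponse resp) {
        resp.setHeader("Cache-Control", "no-store, no-cache, must-revalidate, max-age=0");
        resp.setHeader("Pragma", "no-cache"); // for HTTP/1.0 caches and proxies
        resp.setDateHeader("Expires", 0);     // a date in the past: already expired
    }
}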
The only sure-fire way I have found so far that absolutely, positively guarantees a reload is to add a dummy request parameter like /fields?t=29873, where 29873 is a number that is unique for every request you make within any relevant time span. On the server, of course, you can simply ignore this parameter. Note that it is not enough to start at 1 when the user first opens your page and then count up for subsequent requests, as browsers might keep the cache around across page reloads.
I do my web development in Java (both server-side and client-side, using GWT) and I use this code to generate the dummy "numbers":
private static final char[] base64chars =
        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_.".toCharArray();

private static int tagIndex = 0; // note: not synchronized; use an AtomicInteger if calling from multiple threads

/**
 * Generates a unique 6-character tag string that is guaranteed not to repeat
 * for about 400 days, if this function is, on average, not called more often
 * than twice every millisecond.
 *
 * @return the tag string
 */
public static String nowTag() {
    int tag = (int) (System.currentTimeMillis() >>> 5); // adjust
    char[] result = new char[6];
    result[5] = base64chars[(tagIndex++) & 63]; // last char: per-call counter (64 values per clock tick)
    result[4] = base64chars[tag & 63];          // remaining chars: 30 bits of the shifted clock
    tag >>>= 6;
    result[3] = base64chars[tag & 63];
    tag >>>= 6;
    result[2] = base64chars[tag & 63];
    tag >>>= 6;
    result[1] = base64chars[tag & 63];
    tag >>>= 6;
    result[0] = base64chars[tag & 63];
    return new String(result);
}
It uses the system's clock in combination with a counter to provide up to about two guaranteed-unique values every millisecond. You might not need this rate, so feel free to change the >>> 5 that I marked with "adjust" to fit your needs: every time you increase it by 1, your rate goes down by a factor of two and your uniqueness time span doubles. So, for example, with >>> 8 you can generate about one value every 4 ms, and the values should not repeat for about 3200 days. Of course, this no-repeat guarantee goes away if the user messes with the system clock, but since these values are not generated sequentially, it is still very unlikely that you will hit the same number twice. The code generates a 6-character base64 text string rather than a decimal number to keep the URLs as short as possible.
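To close the loop with the cache-busting scheme from above, usage is just string concatenation; a minimal sketch (the /fields path is the example from earlier, and the tag shown in the comment is of course just a made-up sample value):

// Append the throwaway tag so every request gets a fresh, uncacheable URL.
String url = "/fields?t=" + nowTag(); // e.g. "/fields?t=0Xk2_a"
// The /fields handler on the server simply never reads the "t" parameter.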
Hope this helps. :)