5

I'd like to disable directory traversal like example.com/page/../other-page (even when the target is a real page) in my IIS website. I've tried Request Filtering and URL Rewrite rules with a custom response.

The Microsoft documentation on the denyUrlSequences part of Request Filtering actually uses .. as an example:

The following example Web.config file will deny access to three URL sequences. The first sequence prevents directory traversal, […]

<configuration>
   <system.webServer>
      <security>
         <requestFiltering>
            <denyUrlSequences>
               <add sequence=".." />
               [...]
            </denyUrlSequences>
         </requestFiltering>
      </security>
   </system.webServer>
</configuration>

…but it doesn't work; example.com/page/../other-page has already become example.com/other-page before the Deny rule ever runs. You can prove this by setting a Deny rule for page/sub and visiting example.com/page/./sub-page: the normalized path is blocked by the rule, even though the original, un-normalized URL would never have matched it.
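For reference, a minimal web.config sketch of that test could look like the following (page/sub is just the placeholder sequence from the example above, not a rule you would actually keep):

<configuration>
   <system.webServer>
      <security>
         <requestFiltering>
            <denyUrlSequences>
               <!-- Test rule: deny the literal sequence "page/sub" -->
               <add sequence="page/sub" />
            </denyUrlSequences>
         </requestFiltering>
      </security>
   </system.webServer>
</configuration>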

I've tested this on IIS 7.5 and IIS 10, and I imagine the same behavior exists in every intervening version, too.

  1. What is doing the normalization? (Probably this library?)
  2. When does it happen in the request lifecycle?
  3. How do I successfully block the following sequences without opening up some other security hole: .., ./, and //?

Internet searches only want to tell me about a circa-2000 vulnerability in an old version of IIS, or how to enable MVC routes with dots in them.

Debug note: If you use curl to test this behavior, make sure to add the --path-as-is option so that curl doesn't normalize the path client-side; a sample command is sketched after these notes. Some browsers also appear to do client-side normalization.
Usage note: I'm nominally trying to shut down example.com/clubs-baby-seals/../about-us lest someone take the link's successful load as an endorsement of seal mistreatment.
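For example, a test request against the hypothetical URL from the question might look like this (-i just prints the response status and headers):

curl --path-as-is -i "https://example.com/page/../other-page"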

Michael

2 Answers

1

Any component along the way could be changing it, from the browser to HTTP.sys to IIS itself. The important point is whether the user has access to the resource they end up getting. If the user already has access, there is no security hole: they could just as well type the normalized address into the browser themselves. Using the '..' dots does not reveal any information they don't already have.

If, on the other hand, you really want to know which component changes the path, I would personally do some network tracing along with FREB (Failed Request Tracing) logging to see what happens at each stage. I don't know the answer offhand.
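As a starting point, Failed Request Tracing rules can be declared per site in web.config; a rough sketch (assuming you also enable Failed Request Tracing on the site in IIS Manager, and deliberately treating every status code as a "failure" so successful requests get traced too) might be:

<configuration>
   <system.webServer>
      <tracing>
         <traceFailedRequests>
            <!-- Trace every request -->
            <add path="*">
               <traceAreas>
                  <!-- The WWW Server provider's RequestNotifications area logs each pipeline stage -->
                  <add provider="WWW Server" areas="RequestNotifications" verbosity="Verbose" />
               </traceAreas>
               <!-- 200-999: treat all responses as "failures" so everything gets traced -->
               <failureDefinitions statusCodes="200-999" />
            </add>
         </traceFailedRequests>
      </tracing>
   </system.webServer>
</configuration>

The resulting trace files log the URL at each pipeline notification, which should help show where the dot-segments disappear.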

0

While I do not know the specific answer to (1) and (2), it turns out for (3) that the original pre-normalization URI path is available in the UNENCODED_URL server variable. This makes it possible to run URL Rewrite rules against it:

<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="Block directory traversal attempts" stopProcessing="true">
                    <match url="(.*)" />
                    <!-- {UNENCODED_URL} is the raw URL as the client sent it, before normalization -->
                    <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
                        <add input="{UNENCODED_URL}" pattern="\.\." />
                        <add input="{UNENCODED_URL}" pattern="\./" />
                        <add input="{UNENCODED_URL}" pattern="//" />
                    </conditions>
                    <action type="CustomResponse" statusCode="404"
                            statusReason="Not Found" statusDescription="Page not found" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>
Michael