Since it isn't really popular to use the Origin / X-Frame-Options HTTP headers, and I don't think the new CSP in Firefox will fare better (overhead, complexity, etc.), I want to make a proposal for a new JavaScript / ECMA version.

But first I'm publishing the idea so you can tell me if it's bad. I call it simply jsPolicy:

Everyone who uses JavaScript has placed scripts in their HTML head. So why don't we use one of them to declare policies there that control all subsequent scripts? Example:

<html>
<head>
<title>Example</title>
<script>
window.policy.inner = ["\nfunction foo(bar) {\n  return bar;\n}\n", "foo(this);"];
</script>
</head>
<body>
<script>
function foo(bar) {
  return bar;
}
</script>
<a href="#" onclick="foo(this);">Click Me</a>
<script>
alert('XSS');
</script>
</body>
</html>

Now the browser compares each script element's content and each event handler attribute's value (like onclick) with the entries in the policy, so the last script block is not executed (ignored).

Of course it wouldn't be practical to duplicate all the inline code, so we use checksums instead. Example:

crc32("\nfunction foo(bar) {\n  return bar;\n}\n");

returns "1077388790"
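For reference, this kind of checksum is an ordinary table-driven CRC-32 (the IEEE polynomial, as used by PHP's crc32()). A minimal sketch of how a browser could compute it over the raw script text, assuming single-byte/ASCII content:

```javascript
// Table-driven CRC-32 (IEEE polynomial 0xEDB88320), returned as an
// unsigned decimal string like the policy examples above.
function crc32(str) {
  // Build the 256-entry lookup table once and cache it.
  if (!crc32.table) {
    crc32.table = [];
    for (var n = 0; n < 256; n++) {
      var c = n;
      for (var k = 0; k < 8; k++) {
        c = (c & 1) ? (0xEDB88320 ^ (c >>> 1)) : (c >>> 1);
      }
      crc32.table[n] = c >>> 0;
    }
  }
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < str.length; i++) {
    crc = (crc >>> 8) ^ crc32.table[(crc ^ str.charCodeAt(i)) & 0xFF];
  }
  return ((crc ^ 0xFFFFFFFF) >>> 0).toString();
}

crc32("123456789"); // "3421780262" (the standard CRC-32 check value 0xCBF43926)
```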

And now the full example:

if (typeof window.policy != 'undefined') {
  window.policy.inner = ["1077388790", "2501246156"];
  window.policy.url = ["http://code.jquery.com/jquery*.js","http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"];
  window.policy.relative = ["js/*.js"];
  window.policy.report = ["api/xssreport.php"];
}

The browser only needs to check whether the checksum of an inline script is listed in policy.inner, or whether the script.src URL matches policy.url.
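That comparison could look like the following sketch. All names are illustrative, not part of any spec; `checksum` stands in for whatever algorithm the policy uses (e.g. crc32):

```javascript
// Hypothetical check a jsPolicy-aware engine could run before executing
// a <script>. `policy` is the object set in the page head.
function urlMatches(patterns, url) {
  return patterns.some(function (pattern) {
    // Turn a wildcard pattern like "http://code.jquery.com/jquery*.js"
    // into an anchored regular expression ("*" matches any run of chars).
    var escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
    return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$').test(url);
  });
}

function isAllowed(policy, script, checksum) {
  if (script.src) {                          // external script
    if (policy.url === false) return false;  // false = disallow all external
    if (!policy.url) return true;            // missing = no limitation
    return urlMatches(policy.url, script.src);
  }
  if (policy.inner === false) return false;  // false = no inline scripts
  if (!policy.inner) return true;            // missing = no limitation
  return policy.inner.indexOf(checksum(script.text)) !== -1;
}
```

A disallowed script would then simply be skipped with a console warning rather than aborting the page.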

Note: The idea behind policy.relative is to allow local scripts only:

window.policy.url = false;
window.policy.relative = ["js/*.js"];

Note: policy.report should be nearly the same as done with CSP (sends blocked scripts and urls to an api):
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-unofficial-draft-20110315.html#violation-report-syntax

Important:

  • The policy can't be set twice (otherwise a warning is thrown), i.e. it is a constant
  • Open question: the policy can only be set in the head (otherwise a warning is thrown)
  • The policy is only used to check the scripts that are part of the HTML source, not those that are created on the fly. Example:
    document.write('<script src="http://code.jquery.com/jquery-1.5.2.min.js"></scr' + 'ipt>');
    You don't need a policy.url entry for "http://code.jquery.com..." because the policy.inner checksum already validated the complete script source. This means the source is loaded even if policy.url is set to false (yes, it's still secure!). This guarantees simple usage of the policy.
  • If one of the policies is missing, there is no limitation; e.g. an unset policy.relative means that all local files are allowed. This guarantees backward compatibility.
  • If one of the policies is set to false, no usage is allowed (the default is true). Example:
    policy.inner = false;
    This disallows any inline scripting.
  • The policy only ignores disallowed scripts and logs a warning to the console (an error would stop the execution of allowed scripts, which isn't needed)
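The "constant" behaviour in the first bullet can even be approximated in today's JavaScript with `Object.freeze` (ES5); a native jsPolicy engine would enforce it itself. A sketch, using a plain object in place of window.policy:

```javascript
// Freeze the policy after the head script sets it, so a later
// (possibly injected) script cannot loosen it.
var policy = {
  inner: ["1077388790", "2501246156"],
  url: false,
  relative: ["js/*.js"]
};
Object.freeze(policy);

// An attacker's later attempt to overwrite a rule has no effect
// (it is silently ignored, or throws a TypeError in strict mode).
try {
  policy.url = ["http://evil.example/*"];
} catch (e) { /* strict mode: TypeError */ }
```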

I think this would make XSS impossible; unlike CSP, it would prevent persistent XSS as well (as long as nobody overwrites the policy), and it would be much easier to maintain.

What do you think?

EDIT:
Here is an example implemented in JavaScript:
http://www.programmierer-forum.de/php/js-policy-against-xss.php

Of course we can't actually control script execution, but it shows how it could work if a jsPolicy-compatible browser did.

EDIT2:
Don't think I'm talking about coding a little JavaScript function to detect XSS!! My jsPolicy idea has to be part of a new JavaScript engine. You can compare it to a PHP setting placed in the .htaccess file: you cannot change that setting at runtime. The same requirement applies to jsPolicy. You could call it a global setting.

jsPolicy in short:
HTML parser -> sends scripts to the JavaScript engine -> compares them with the jsPolicy -> allowed?
A) yes: executed by the JavaScript engine
B) no: ignored, and a report is sent to the webmaster

EDIT3:
As referenced in Mike's comment, this would be a possible setting, too:

window.policy.eval = false;
mgutt
  • Yes it is. If this were part of the new JavaScript / ECMA version, it would solve all XSS problems. Maybe I didn't explain it well. Which part is unclear to you? – mgutt Apr 30 '11 at 10:11
  • I've updated the explanation and added some more examples. I hope it's clearer now. – mgutt Apr 30 '11 at 12:13
  • From MDC: "Note: For security reasons, you can't use the element to configure the X-Content-Security-Policy header." That's a good point. You can't set security parameters on the client side... How is that secure? – Rudie Apr 30 '11 at 12:14
  • @Rudie With my idea it is (if the browser complies with the policy requirements), as the browser controls all script execution depending on the policy. Do you know why the CSP meta tag is disallowed? I think it's because you can overwrite the meta tag using JavaScript, and it is easier to ignore the meta tag than to build security rules into the JavaScript engine that disallow accessing it. But there is no comparable weakness in my idea, as the policy setting is constant. – mgutt Apr 30 '11 at 12:32
  • But setting the policy in the HTTP headers is at least as secure, don't you agree? I think it's more secure. And it's only a few bytes, probably less than a cookie (now those are inefficient). **edit** It's also less than the JS you propose to send with every request. All solutions have 'overhead'. – Rudie Apr 30 '11 at 13:21
  • @Rudie You have a lot of disadvantages because you need to add the headers with Apache or on-the-fly with PHP. And last but not least you raise the overhead. And no, jsPolicy does not add overhead, because the jsPolicy setting will be placed - for sure - in the general external JS file (my first example with the inline policy is only to show quickly how it would work) and this file will be cached. – mgutt Apr 30 '11 at 15:28
  • Good point =) I think you might have something then. Only with XSS it's still possible to circumvent these new rules (and that's exactly what XSS is). So I wouldn't call it an XSS solution, but it's definitely browser-security related =) Keep us up to date? – Rudie Apr 30 '11 at 15:36
  • If the browser sticks to the jsPolicy you can not circumvent the rules. The only option would be to overwrite the external js file but this would require ftp access and I think then you have a much bigger problem ^^ – mgutt Apr 30 '11 at 15:44
  • XSS also includes executing scripts (on the same website > domain) that weren't meant to be executed. For instance printing bad HTML comments. Those comments (with XSS) could contain javascript. That javascript won't be stopped with your solution (unless it's so crazy you didn't include it in the rules). **That** is XSS. Or am I wrong? I could be... – Rudie Apr 30 '11 at 17:25
  • Yes you are. Example: If you add to a forum database a persistant xss to break through some html tags with `--> – mgutt Apr 30 '11 at 17:59
  • Yeah I get it. But the scripts that you (the developer) allow, can be the same kind as the scripts used as XSS... Inline scripts aren't cool, but everybody uses them (and they should work!!). Another inline script would be some XSS. How do you filter one and not the other? How precise do you make `policy.inner`? – Rudie Apr 30 '11 at 18:45
  • They are filtered through the checksums, and the precision is defined by the collision rate of the checksum/hash algorithm ([crc32 collision @ stackoverflow](http://stackoverflow.com/questions/1515914/crc32-collision/1517776#1517776)). This means the checksum of `alert('xss')` (crc32=3414049779) is completely different from `alert('XSS')` (crc32=2462090537). If crc32 is not safe enough, we could use [md5](http://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities) instead. – mgutt Apr 30 '11 at 18:57

4 Answers


Cross-site scripting occurs on the client-side. Your policies are defined on the client-side. See the problem?

I like Content Security Policy, and I use it on all of my projects. In fact, I am working on a JavaScript framework, which has one of its requirements "be CSP-friendly."

CSP > crossdomain.xml > your policy.

Tower
  • CSP is client-side as well. Think about that. You define rules inside the header and the client's browser sticks to the CSP specification and stops execution if needed. If a browser stuck to the jsPolicy, it would be the same. I hope you don't think that I'm talking about coding a JS function or similar. This really should be an update to the JavaScript engine itself, adding new `global constants`. – mgutt Apr 30 '11 at 18:26
  • You can compare jsPolicy to a php-setting placed into the .htaccess file. It can't be changed in runtime. – mgutt Apr 30 '11 at 18:33
  • CSP is not client-side. The web server sends an HTTP header, which the browser grabs and acts upon. The whole document with DOM and JavaScript does not even exist at this point. The client-side era starts after CSP; otherwise, CSP would not work. There is a reason why they decided not to allow setting CSP policies via meta tags. – Tower Jun 11 '11 at 10:45

The vast majority of XSS attacks come from "trusted" sources, at least as far as the browser is concerned. They are usually the result of echo'ing user input, e.g. in a forum, and not properly escaping the input. You're never going to get an XSS from linking to jquery, and it is extremely rare that you will from any other linked source.

In the case when you are trying to do cross-domain scripting, you can't get a checksum on the remote script.

So although your idea seems fine, I don't really see a point to it.

  • You are talking about persistent XSS, and that isn't covered by CSP, but it is with my idea. XSS in a forum post means echo'ing "". But it would be useless because the checksum of alert('xss') is not set in policy.inner. The same applies if the XSS is "". The URL is not part of the policy.url and can't be loaded. Do you understand my idea now? – mgutt Apr 30 '11 at 10:00
  • Ahh, okay. I still don't really see the point for external references, and I think you're asking for a lot of overhead on the developer's part (okay, not _a lot_, but more than simply dropping a link in). As far as the relative goes, `foo.php?js/foo.js` would technically match. In general, I'm starting to warm to the idea, but I think asking people to calculate checksums isn't going to fly especially with new developers that don't know what a checksum is :) People also don't like retyping URLs, especially in systems that may be including JS from 20+ partials or views. –  Apr 30 '11 at 16:21
  • As far as "forum xss" prevention goes, my best idea was to create a `` tag, but that of course prevents speed optimizations that place scripts immediately before `` –  Apr 30 '11 at 16:22
  • @noScriptBelowThis Such a tag would be "similar" to the [{literal}](http://www.smarty.net/docsv2/en/language.function.literal) tag in Smarty. A very short -Tag would be OK to minimize overhead. But this won't help against XSS inside of scripts, like a search form with an onsubmit event. But it's very easy to understand, so *thumbs up* for this, too ;) – mgutt Apr 30 '11 at 17:15
  • @high_demand_on_developers Take a look at the [CSP Specification](https://wiki.mozilla.org/Security/CSP/Specification). What would be easier? The checksum thing isn't easy, you're right. But it would be no problem to set those checksums/URLs automatically with PHP: 1. obtain `filemtime()` of the template file, 2. if changed, `preg_match()` the scripts, 3. get the URL or `crc32()` checksum, and 4. overwrite the policy settings in the general JS file. But don't think about that. Think about the benefit of disallowing **all** inline scripts and allowing only one local JS files folder and a handful of external scripts as src. – mgutt Apr 30 '11 at 17:33
  • @noScriptBelowThis2 I was too hasty. A starting and closing tag won't work because XSS is able to break through. But I think you meant without a closing tag, as you wrote `BelowThis`. This would be possible, but not useful, as you aren't able to place code between forum posts or scripts into the footer etc. And what about XSS before that tag? I knew a website that had an XSS hole in the ``-tag ^^ – mgutt Apr 30 '11 at 18:06

This idea keeps getting floated and re-floated every so often... and each time security experts debunk it.
Don't mean to sound harsh, but this is not a development problem, it is a security problem. Specifically, most developers don't realize how many variants, vectors, exploits and evasion techniques there are.

As some of the other answers here mentioned, the problem is that your solution does not solve the underlying problem of whether or not to trust whatever arrives at the browser, since on the client side you have no way of knowing what is code and what is data. Even your solution does not prevent this.

See e.g. this question on ITsec.SE for some of the practical issues with implementing this. (your question is kinda a duplicate of that one, more or less... )

Btw, re CSP - check this other question on ITsec.SE.

AviD
  • @whitelist_dom_idea Nice to see that others had similar ideas, but that one had a main problem: it wasn't safe against nested XSS that breaks out of the whitelisted elements. But where is the hole in jsPolicy? – mgutt May 01 '11 at 21:28
  • @code_or_data Why wouldn't you know that? Only if it is code is it forwarded to the JavaScript engine. Even if the HTML parser has a bug, or a browser addon causes data to be treated as code and forwarded to the engine, the jsPolicy comes first, as it is part of the engine and not the parser. – mgutt May 01 '11 at 21:34

The policy is only used to check the scripts that are part of the HTML source and not those that are placed on-the-fly. example: document.write(''); You don't need a policy.url definition for "http://code.jquery.com..." as the policy.inner checksum validated the complete script source. This means the source is loaded even if policy.url is set to false (yes it's still secure!). This guarantees a simple usage of the policy.

It seems like you've given the whole game away here.

If I have code like

// Pull the first name=value parameter out of the query string.
var match = location.search.match(/[&?]([^&=]+)=([^&]*)/);

// Call the global function named by the parameter, passing its value
// — e.g. ?eval=... ends up invoking window.eval(...).
window[decodeURIComponent(match[1])](decodeURIComponent(match[2]));

and someone tricks a user into visiting my site with the query string ?eval=alert%28%22pwned%22%29, then they've been XSSed, and your policy has done nothing to stop it.

Mike Samuel
  • Nice! But isn't that as dumb as `include($_GET['page'] . '.php');` or using "123456" as the admin password? And should we abandon the idea even though it is able to solve 99% of all XSS?! I really thank you for this example, but if eval (via POST/GET/COOKIE) is the only hole in the system, and only when the code is unsafe, what about extending the policy with `policy.eval = false`? And CSP won't help against your attack either. – mgutt May 01 '11 at 22:03
  • The holes in the system are huge. `javascript:` in URLs from third parties, XSS via HTML from third parties. Actually, it's probably easier to list the XSS vectors it does stop than the ones it doesn't. – Mike Samuel May 02 '11 at 00:58
  • Please give code examples. How do you want to place JavaScript code inside a URL of a third-party script? Or do you mean if the third-party server, like jQuery's, is hacked? No technique would help here. That's the reason why you should use local scripts only if you don't trust third parties. But this isn't a hole in the policy then; it's a hole at the third party. Or do you mean XSS inside a script.src URL? This won't work, as javascript:alert('xss') means adding an inline script, which is not allowed through the policy. And what does HTML from third parties mean? If it's inline code, it's not allowed either. – mgutt May 02 '11 at 22:33
  • @mgutt, `myLink.href = linkFromThirdParty` is not covered by your policy. Neither is CSS like `p { color: expression(alert(1337)) }`, nor `myDiv.innerHTML = textFromThirdParty`. By only checking scripts, you are missing most of the XSS vectors out there. – Mike Samuel May 03 '11 at 00:55
  • CSS is covered as it is inline code. I don't know how myLink.href is filled. Do you mean if someone hacks the jQuery server and places bad code into `jquery-1.5.2.min.js`? XSS is only the result then (the server attack comes first). And if you don't trust jQuery, you need to copy the file to localhost. – mgutt May 03 '11 at 07:38
  • ThirdParty should be covered. As long as `varFromThirdParty` is defined inside an inline script, it's covered by policy.inner, and as long as it's defined inside an external file, policy.url covers it. If it is defined through the browser location, it's unsafe JavaScript code. If you set `policy.url = ["http://example.org/tool.js"];` you trust example.org to be safe. And there isn't any security against the results of server attacks (unless you invent something like "script-url-md5-hash-database.com" where a browser needs to compare the file hashes: possible, but slow). – mgutt May 03 '11 at 07:48
  • @mgutt, the definition of `varFromThirdParty` is not the source of the dangerous content. The string that that `var` references is the source of the dangerous content. – Mike Samuel May 03 '11 at 12:48
  • And where does the string come from? You need an inline or external script to do that (covered by jsPolicy). Or, as you said, a browser location match (which is unsafe coding). – mgutt May 04 '11 at 19:02