
I am wondering whether it is possible to detect whether a requested page has been cached before or not.

Idea: When you visit a malicious web page, the site sends a request (using AJAX) to www.stackoverflow.com, for example, and checks whether it is cached. With a big list of sites, an attacker could steal your web-browser history.
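
Roughly, the probing loop could look like the sketch below (assuming jQuery is loaded on the malicious page; the site list is a placeholder and `checkIfCached` is a hypothetical helper, one possible version of which is sketched under *Realisation* below):

```js
// Hypothetical list of sites whose presence in the browser cache the attacker probes.
var sites = [
    "https://stackoverflow.com/",
    "https://en.wikipedia.org/"
];

sites.forEach(function (url) {
    // checkIfCached is a hypothetical helper returning a promise of true/false;
    // one possible timing-based implementation is sketched under "Realisation" below.
    checkIfCached(url).then(function (cached) {
        console.log(url + " -> " + (cached ? "looks cached (probably visited)" : "not cached"));
    });
});
```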

Realisation: My current idea is to start a timer, launch a GET request for a page, wait for the page to load, then stop the timer and take the difference between the two timestamps. Then launch the request again, this time with the `cache: false` flag, and compare both times to determine whether the page was cached or not.
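
A minimal sketch of that timing comparison, assuming jQuery 3's `$.ajax` (its `cache: false` option appends a `_=<timestamp>` cache-busting parameter to GET requests); the "twice as fast" heuristic is an arbitrary placeholder, and in practice the same-origin policy blocks most cross-origin reads, which already limits this approach:

```js
// Hypothetical probe: time a normal request against a cache-busted one for the same URL.
function checkIfCached(url) {
    function timedRequest(useCache) {
        var start = Date.now();
        var elapsed = function () { return Date.now() - start; };
        // With cache: false, jQuery appends a timestamp parameter so the cache is bypassed.
        return $.ajax({ url: url, cache: useCache, dataType: "text" })
            .then(elapsed, elapsed); // measure elapsed time whether the request succeeds or fails
    }

    // First request may be served from cache; second one forces a fresh download.
    return timedRequest(true).then(function (maybeCachedTime) {
        return timedRequest(false).then(function (forcedTime) {
            // Arbitrary heuristic: call it "cached" if the first load was much faster.
            return maybeCachedTime * 2 < forcedTime;
        });
    });
}
```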

Question: Is this scenario possible and, more importantly, how can this kind of attack be prevented?

Thank you for your answers!

A J
  • This sounds like you're asking from the viewpoint of a web user rather than a site / application developer - *"how do I protect myself and my browser from an arbitrary website?"* This is really the responsibility of browser vendors to guard against. For example, a similar privacy issue involving the `:visited` colour state of hyperlinks was used a few years ago to achieve the same thing - I think its abuse has since been restricted. – Emissary Jul 30 '16 at 16:53
  • I saw that bug with `:visited`; I am wondering whether it is possible to use caching to attack privacy. – A J Jul 30 '16 at 17:04
  • I doubt it, cross-origin policies are pretty well covered - even if it could be done, the attack you are describing doesn't sound reliable. It's impractical: the volume of data per site and the client bandwidth are unknown quantities - it would likely take too long to be viable. – Emissary Jul 30 '16 at 17:11
  • To solve the problem with giant pages: what if the AJAX request targets a specific file (a .css file, for example, not the whole page) just large enough to determine whether it is cached or not? – A J Jul 30 '16 at 19:59
  • I've tested injecting foreign images into random sites and, on Chrome at least, static resources are cached behind the domain in the address bar regardless of where they came from, ***i.e.*** a file is downloaded again for each top-level site even if it has already been fetched on a different site, so this won't work. But even if it did, for most resources this would only tell you that the user has visited a domain - it wouldn't determine specific pages or user interests. And then there are things like no-cache headers and files served from CDNs. I'm not sure what you are worried about; it's totally impractical. – Emissary Jul 30 '16 at 20:52
  • So that's a no go... I am worried that one site could steal all my web history (without accessing the browser's local profile on the hard drive) using a large list of sites and a script that tries to load them. I have discovered that I can log history in flight by monitoring the application title in the task manager; now I am trying to find a way without local access. – A J Jul 30 '16 at 21:04

0 Answers