Your robots.txt MUST be placed in the host root; you can't have a robots.txt at example.com/path/robots.txt. So you have to move your robots.txt one level up, to example.com/robots.txt. And then it's clear that Disallow: / blocks everything on this host.
If you don't want to give information about your "private" URLs, you could specify only the beginning of those URLs (if possible in your case):
User-agent: *
Disallow: /p
This would block all URLs that start with example.com/p, like:
example.com/p
example.com/p.html
example.com/path
example.com/path/
example.com/path/foobar
example.com/p12asokd1
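You can check this matching behavior yourself with Python's standard-library robots.txt parser (the robots.txt content and URLs below just mirror the example above):

```python
import urllib.robotparser

# The hypothetical robots.txt from above: block everything starting with /p.
robots_txt = """\
User-agent: *
Disallow: /p
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# All URLs whose path starts with /p are blocked for every user agent ...
for path in ["/p", "/p.html", "/path", "/path/", "/path/foobar", "/p12asokd1"]:
    print(path, rp.can_fetch("*", "https://example.com" + path))  # all False

# ... while other URLs on the host remain crawlable.
for path in ["/", "/index.html", "/about"]:
    print(path, rp.can_fetch("*", "https://example.com" + path))  # all True
```

`Disallow` rules are plain prefix matches, which is why a two-character rule like `/p` catches `/path/foobar` as well.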
If this is not possible (e.g., if your public URLs might also start with such characters), you could use the robots meta element instead.
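In that case, each private page would carry a robots meta element in its head, for example with the standard noindex value:

```html
<!-- In the <head> of every page that should stay out of search results: -->
<meta name="robots" content="noindex">
```

This keeps the rule on the page itself, so you don't have to list (or hint at) the private URLs anywhere in a public robots.txt.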
Note that when using robots.txt to block URLs, search engines might still index those URLs and link to them in their search results (e.g., when someone links to your private URLs), so these URLs aren't so "private" anymore. With the meta element, (polite) search engines won't even index the URL, which would be an advantage for you.