I have a server hosting multiple websites, and I want to block crawlers from only one of them. I know that robots.txt accepts the following:
User-agent: *
Disallow: /
to block bots from crawling a site. However, the articles I've read use ambiguous language: some say this blocks the site, others say it blocks the whole server.
If I put this file in the root directory of that one site, will it block only that site? Is there a better practice for doing this?
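To make the setup concrete, what I have in mind is something like the layout below (the paths are just placeholders for the actual document roots, and each site is served from its own directory):

    /var/www/site-to-block/robots.txt    # contains the two lines above
    /var/www/other-site-1/               # no robots.txt, or an allow-all one
    /var/www/other-site-2/

So the robots.txt with Disallow: / would live only in the document root of the one site I want blocked, and the other sites would be left alone.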