We run Apache (on Windows) and NGINX (on CentOS) development servers. The problem I have is that Google somehow keeps managing to get hold of the development addresses and indexing them (could it be picking them up from the Chrome address bar?). Is there a way of blocking all traffic from bots/spiders at the server level, rather than resorting to individual robots.txt files in each site or password-only access?
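I assume the Apache box has an equivalent via SetEnvIf/mod_rewrite, but for the NGINX servers at least I was thinking of something along these lines, turning crawlers away by User-Agent before any site config is reached (the user-agent pattern and the `dev.example.com` name are just my guesses); is this a sensible approach?

```nginx
# Goes in the http { } context of nginx.conf.
# Classify requests by User-Agent; the pattern is my guess at common crawlers.
map $http_user_agent $is_bot {
    default                                                     0;
    ~*(googlebot|bingbot|slurp|duckduckbot|baiduspider|yandex)  1;
}

server {
    listen 80 default_server;
    server_name dev.example.com;   # placeholder for a dev site

    # Refuse crawlers outright before any site-specific config runs.
    if ($is_bot) {
        return 403;
    }

    root /var/www/dev;
}
```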
A related problem is on the live environment (NGINX on CentOS), where we use a static asset domain to serve images, JS, etc. Again, Google has gone and indexed this domain in its search results; is there a way to prevent this?
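For the asset domain I wondered whether sending an `X-Robots-Tag` header on every response would do it, since Google documents that it honours that header. A rough sketch of what I mean (`static.example.com` is a placeholder for our real asset domain):

```nginx
server {
    listen 80;
    server_name static.example.com;   # placeholder for our asset domain
    root /var/www/static;

    # Ask crawlers to drop everything served from this domain from their
    # index; "always" attaches the header to error responses too.
    add_header X-Robots-Tag "noindex, nofollow" always;
}
```

Would that be enough on its own, or is there a better server-level option?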