You can't put full URLs into robots.txt
disallow rules. Your proposed rule WON'T WORK as written:
# INCORRECT
Disallow: https://jobs.example.com/
It looks like you are trying to disallow crawling on the jobs
subdomain. That is possible, but each subdomain gets its own robots.txt file, so you would have to configure your server to serve different content for the two robots.txt
URLs (one way to do this is sketched after the list):
https://example.com/robots.txt
https://jobs.example.com/robots.txt
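As a rough illustration, here is a minimal sketch of serving per-host robots.txt content from a single application. It assumes a Flask app answers requests for both hostnames; Apache or nginx rewrite rules work just as well, and the hostnames below are only the examples from above.

# Minimal sketch, assuming one Flask app fronts both hostnames.
# Any server-side mechanism (nginx/Apache rewrites, etc.) works equally well.
from flask import Flask, Response, request

app = Flask(__name__)

MAIN_ROBOTS = "User-Agent: *\nDisallow:\n"    # allow everything on the main site
JOBS_ROBOTS = "User-Agent: *\nDisallow: /\n"  # block everything on the jobs subdomain

@app.route("/robots.txt")
def robots():
    # Pick the robots.txt body based on the Host header of the request
    host = request.host.split(":")[0].lower()
    body = JOBS_ROBOTS if host == "jobs.example.com" else MAIN_ROBOTS
    return Response(body, mimetype="text/plain")

With something like this in place, a request for https://jobs.example.com/robots.txt gets the blocking rules while https://example.com/robots.txt keeps its own content.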
Then the robots.txt for the jobs subdomain
should disallow all crawling on that subdomain:
User-Agent: *
Disallow: /
If you only want to disallow the home page of that subdomain, you have to use syntax that only the major search engines understand. You can use $
to mean "ends with", and the major search engines will interpret it correctly:
User-Agent: *
Disallow: /$
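To see why the $ matters, here is a toy matcher (my own sketch of the wildcard semantics the major engines describe, not any crawler's actual code). It shows that Disallow: /$ blocks only the home page, while a plain Disallow: / blocks every path:

import re

def is_blocked(path, disallow_rule):
    # Toy illustration of the extended Disallow syntax: '*' matches any
    # sequence of characters and a trailing '$' anchors the rule to the end
    # of the path. Without '$', the rule is a simple prefix match.
    pattern = re.escape(disallow_rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path) is not None

print(is_blocked("/", "/$"))             # True  - home page is blocked
print(is_blocked("/careers/123", "/$"))  # False - other pages stay crawlable
print(is_blocked("/careers/123", "/"))   # True  - plain "/" blocks everything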