As far as I know, if we want to prevent robots from accessing our web site, we have to parse the 'User-Agent' header of the HTTP request and then check whether the request is coming from a robot or a browser.
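For example, the kind of check I have in mind is roughly like this (just a sketch; the signature list and function name are only illustrative, not from any real library):

```python
# Very simple User-Agent check: flag requests whose User-Agent
# matches a known robot signature or is missing entirely.
BOT_SIGNATURES = ["bot", "crawler", "spider", "curl", "wget"]

def looks_like_robot(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like a robot."""
    if not user_agent:
        # Many robots send no User-Agent header at all.
        return True
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

# A request that identifies itself as a bot is caught,
# but a robot sending a fake browser User-Agent is not.
print(looks_like_robot("Googlebot/2.1 (+http://www.google.com/bot.html)"))  # True
print(looks_like_robot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))        # False
```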
I think we cannot completely prevent robots from accessing our web site this way, because someone can program any HTTP client to send requests with a FAKE browser user-agent. In that case, we cannot tell whether the user-agent is a real one coming from a browser or a fake one coming from a robot program.
My question is: is there any way to completely prevent robots from accessing our web site?