
I am struggling to make mod_security completely bypass requests from bots I trust.

I added these example rules at the bottom of my /etc/modsecurity/modsecurity.conf:

SecRule REQUEST_HEADERS:User-Agent "Googlebot" id:'9999991',phase:1,nolog,allow,ctl:ruleEngine=off
SecRule REQUEST_HEADERS:User-Agent "bing" id:'9999992',phase:1,nolog,allow,ctl:ruleEngine=off
SecRule REQUEST_HEADERS:User-Agent "facebookexternalhit" id:'9999993',phase:1,nolog,allow,ctl:ruleEngine=off

Is that the right way to go?

Fabio B.

1 Answer


This is not a good idea.

Many attackers can forge the User-Agent header, so this would open a gap that bypasses all of your mod_security rules.

The best way to avoid affecting the search robots is to eliminate false positives in your mod_security rules, monitor the logs frequently, and configure your firewall to allow traffic whose reverse DNS resolves to the search bots' domains, for example:

.configserver.com
.configserver.co.uk
.googlebot.com
.crawl.yahoo.net
.search.msn.com
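
The reverse-DNS check described above only works if you also verify the result in the forward direction, since reverse DNS records can be spoofed by whoever controls the IP block. A minimal sketch of that double check in Python follows; the function names and suffix list are illustrative assumptions, not part of mod_security or any firewall tool:

```python
import socket

# Trusted crawler domains, taken from the answer's example list
ALLOWED_SUFFIXES = (".googlebot.com", ".crawl.yahoo.net", ".search.msn.com")

def host_matches(hostname, suffixes=ALLOWED_SUFFIXES):
    """Pure check: does the hostname end in one of the trusted suffixes?"""
    return hostname.endswith(suffixes)

def is_verified_bot(ip):
    """Double DNS check: reverse-resolve the IP, verify the hostname
    suffix, then forward-resolve the hostname and confirm it maps
    back to the same IP. Returns False on any lookup failure."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False
    if not host_matches(hostname):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
    return ip in forward_ips
```

The suffix check alone is not enough: a hostname like googlebot.com.evil.net would pass a naive substring test, which is why the check uses endswith on dotted suffixes and then confirms the forward lookup matches the original IP.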