
My website is often down because a spider is accessing too many resources. This is what my hosting company told me. They told me to ban these IP addresses: 46.229.164.98, 46.229.164.100, 46.229.164.101.

But I have no idea how to do this.

I've googled a bit, and I've now added these lines to the .htaccess file in the root:

# allow all except those indicated here
<Files *>
Order Allow,Deny
Allow from all
Deny from 46.229.164.98
Deny from 46.229.164.100
Deny from 46.229.164.101
</Files>

Is this 100% correct? What else could I do? Please help me; I really have no idea what I should do.

testermaster
  • Looks like an .htaccess file, not a robots.txt; you should talk to the hosting provider and ask for details. – mpm May 13 '14 at 12:55
  • Yes, I made a mistake; I've now removed these lines from robots.txt and written them into .htaccess. They just told me that these spiders were using too many resources and that I should ban their addresses :/ – testermaster May 13 '14 at 13:00
  • So are you having a specific problem with what you posted? – Patrick Q May 13 '14 at 13:04
  • @PatrickQ No, I had a problem an hour ago; now my hosting provider has shut down the website and is waiting for me to find a solution... – testermaster May 13 '14 at 13:08

2 Answers


Based on these:

https://www.projecthoneypot.org/ip_46.229.164.98
https://www.projecthoneypot.org/ip_46.229.164.100
https://www.projecthoneypot.org/ip_46.229.164.101

it looks like the bot is SemrushBot: http://www.semrush.com/bot.html

If that's actually the robot, their page says:

To remove our bot from crawling your site simply insert the following lines to your
"robots.txt" file:

User-agent: SemrushBot
Disallow: /

Of course, that does not guarantee that the bot will obey the rules. You can block it in several ways; .htaccess is one, just like you did it.
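One caveat about the IP-based block you posted: Order/Allow/Deny is Apache 2.2 syntax. If your host runs Apache 2.4, a minimal sketch of the equivalent (assuming mod_authz_core is available and your host permits these overrides in .htaccess) would be:

# Apache 2.4 replacement for the Order/Allow/Deny block
<RequireAll>
    Require all granted
    Require not ip 46.229.164.98
    Require not ip 46.229.164.100
    Require not ip 46.229.164.101
</RequireAll>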

You can also do this little trick: deny ANY IP address that has "SemrushBot" in its user-agent string:

Options +FollowSymlinks
RewriteEngine On
RewriteBase /
# flag any request whose User-Agent begins with one of these names
SetEnvIfNoCase User-Agent "^SemrushBot" bad_user
SetEnvIfNoCase User-Agent "^WhateverElseBadUserAgentHere" bad_user
# deny every request flagged above
Deny from env=bad_user

This way you will also block other IPs that the bot may use.
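The same check can also be done with mod_rewrite, which the snippet above already enables. This is a sketch rather than part of the original answer; it answers flagged requests with 403 Forbidden:

# block any request whose User-Agent contains "SemrushBot" (case-insensitive)
RewriteCond %{HTTP_USER_AGENT} SemrushBot [NC]
RewriteRule .* - [F,L]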

See more on blocking by user-agent string: https://stackoverflow.com/a/7372572/953684

I should add that if your site is brought down by a spider, it usually means you have a badly written script or a very weak server.

Edit:

This line

SetEnvIfNoCase User-Agent "^SemrushBot" bad_user

tries to match a User-Agent that begins with the string SemrushBot (the caret ^ means "beginning with"). If you want to match SemrushBot ANYWHERE in the User-Agent string, simply remove the caret so it becomes:

SetEnvIfNoCase User-Agent "SemrushBot" bad_user

The above matches if the User-Agent contains the string SemrushBot anywhere (yes, there is no need for .*).
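The distinction matters in practice: bot user-agents usually embed the bot name in the middle of a longer string, so the caret-less version is the safer choice. A sketch (the UA shown in the comment is an approximate form, not verified):

# Typical bot UA (approximate form, for illustration only):
#   Mozilla/5.0 (compatible; SemrushBot/1.0; +http://www.semrush.com/bot.html)
# "^SemrushBot" would NOT match that string, but a substring match does:
SetEnvIfNoCase User-Agent "SemrushBot" bad_user
Deny from env=bad_user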

Sharky
  • I'm using vBulletin, and right now the traffic is 1/3 of the highest traffic I've had in the latest months. When traffic was at 100%, the website was OK :/ – testermaster May 13 '14 at 13:15
  • @daimpa Be sure to check your logs. If one spider comes, I can guarantee more will come. It's like having your telephone number known to advertising companies: once they spot you, they will call you forever, using different names. – Sharky May 13 '14 at 13:23
  • @Sharky: You start the expression with the caret '^'. Does that mean it only matches if the string is at the beginning of the User-Agent? (And I wonder how long before these uncivilized _____ change their User-Agent.) – WGroleau Jan 09 '16 at 18:17
  • @WGroleau Exactly, the caret means "at the beginning of the string". In my experience they don't change their user-agent strings; if someone wants to avoid being blocked, they usually change their user-agent to googlebot or something. Anyway, keep looking at your logs and build a list of bad user-agents that crawl your sites, and block them. BUT be sure to add `Disallow` in robots.txt **whenever applicable** (so they stop crawling you; you cannot rely on just blocking from your end, because even blocking costs resources) and hope for the best... those _______!!! – Sharky Jan 11 '16 at 08:46

You are doing the right thing, BUT you have to write that code in the .htaccess file, not in the robots.txt file.

To deny a search engine from crawling your site, the robots.txt code should look like this:

User-agent: Googlebot
Disallow: /

This will disallow Google from crawling your site.

I would prefer the .htaccess method, by the way.

YourFriend