
I am working on a project to crawl a small web directory and have implemented a crawler using crawler4j. I know that RobotstxtServer should be checking whether a URL is allowed or disallowed by the robots.txt file, but mine is still visiting a directory that should not be visited.

I have read over the source code and my code many times, but I can't seem to figure out why this is. In short, why isn't my program recognizing the /donotgohere/ directory that the robots.txt file says not to go to?

Below is my code for the program. Any help would be awesome. Thank you!

Crawler:

package crawler_Project1_AndrewCranmer;
import java.util.Set;
import java.util.regex.Pattern;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler
{
    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg|png|mp3|mp4|zip|gz))$");

    @Override public boolean shouldVisit(Page referringPage, WebURL url)
    {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches()
                && href.startsWith("http://lyle.smu.edu/~fmoore");  
    }

    @Override public void visit(Page page)
    {
        String url = page.getWebURL().getURL();
        System.out.println("URL: " + url);
        if(page.getParseData() instanceof HtmlParseData)
        {
            HtmlParseData h = (HtmlParseData)page.getParseData();
            String text = h.getText();
            String html = h.getHtml();
            Set<WebURL> links = h.getOutgoingUrls();
        }
    }
}

Controller:

package crawler_Project1_AndrewCranmer;
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller 
{
    public static void main(String[] args) throws Exception
    {
        int numberOfCrawlers = 1;
        String crawlStorageFolder = "/data/crawl/root";

        CrawlConfig c = new CrawlConfig();
        c.setCrawlStorageFolder(crawlStorageFolder);
        c.setMaxDepthOfCrawling(-1);    //Unlimited Depth
        c.setMaxPagesToFetch(-1);       //Unlimited Pages
        c.setPolitenessDelay(200);      //Politeness Delay

        PageFetcher pf = new PageFetcher(c);
        RobotstxtConfig robots = new RobotstxtConfig();
        RobotstxtServer rs = new RobotstxtServer(robots, pf);
        CrawlController controller = new CrawlController(c, pf, rs);

        controller.addSeed("http://lyle.smu.edu/~fmoore");

        controller.start(MyCrawler.class, numberOfCrawlers);

        controller.shutdown();
        controller.waitUntilFinish();
    }
}

1 Answer

crawler4j uses a URL canonicalization process. According to robotstxt.org, the de-facto standard only specifies robots.txt files at the domain root. For this reason, crawler4j will only look there for a robots.txt.

In your case, http://lyle.smu.edu/ does not provide a robots.txt at http://lyle.smu.edu/robots.txt (that request returns an HTTP 404).
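
A quick way to confirm this yourself is a standalone check along the lines of the sketch below (this is not part of crawler4j; it simply requests the root robots.txt and prints the status code):

import java.net.HttpURLConnection;
import java.net.URL;

public class RobotsTxtCheck
{
    public static void main(String[] args) throws Exception
    {
        // Request the robots.txt at the domain root, which is the location crawler4j actually asks for
        URL url = new URL("http://lyle.smu.edu/robots.txt");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");
        System.out.println(url + " -> HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}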

Your robots.txt is located at http://lyle.smu.edu/~fmoore/robots.txt, but the framework only looks at the domain root (as the de-facto standard specifies) to find this file. For this reason, it ignores the directives declared in your case.
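
If you still want the crawler to respect the rules from http://lyle.smu.edu/~fmoore/robots.txt, one workaround (just a sketch, assuming the disallowed directory is http://lyle.smu.edu/~fmoore/donotgohere/ as described in the question) is to encode that exclusion directly in the shouldVisit method of your MyCrawler class:

@Override public boolean shouldVisit(Page referringPage, WebURL url)
{
    String href = url.getURL().toLowerCase();
    return !FILTERS.matcher(href).matches()
            && href.startsWith("http://lyle.smu.edu/~fmoore")
            // Manually skip the path disallowed by ~fmoore/robots.txt,
            // since crawler4j only consults the domain-root robots.txt
            && !href.contains("/~fmoore/donotgohere/");
}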
