Questions tagged [web-crawler]

A Web crawler (also known as Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
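
For example, a link-checking pass can fit in a short script. The sketch below is a minimal illustration in Python, assuming the third-party requests and beautifulsoup4 libraries are installed; the check_links name and the page URL are hypothetical, not part of any existing tool.

# Minimal link-checking sketch: fetch one page, then report links that fail
# to resolve. Assumes the third-party libraries `requests` and
# `beautifulsoup4`; check_links is an illustrative name, not an existing tool.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def check_links(page_url):
    html = requests.get(page_url, timeout=10).text
    for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        target = urljoin(page_url, anchor["href"])
        try:
            status = requests.head(target, timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print("broken:", target, status)

check_links("http://example.com/")  # hypothetical page to audit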

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
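
As a rough illustration of that seed/frontier loop, here is a minimal breadth-first sketch in Python. It assumes the third-party requests and beautifulsoup4 libraries; crawl and MAX_PAGES are illustrative names, and a real crawler would also honour robots.txt, throttle requests per host, and apply the selection and revisit policies discussed below.

# Minimal breadth-first crawl sketch: seed URLs feed a frontier queue, and
# newly discovered links are appended until a page budget is reached.
# Assumes the third-party libraries `requests` and `beautifulsoup4`;
# crawl() and MAX_PAGES are illustrative names, not part of any framework.
from collections import deque
from urllib.parse import urljoin, urldefrag

import requests
from bs4 import BeautifulSoup

MAX_PAGES = 100  # crude volume limit; real crawlers also throttle per host

def crawl(seeds):
    frontier = deque(seeds)   # URLs still to visit (the crawl frontier)
    visited = set()           # URLs already fetched
    while frontier and len(visited) < MAX_PAGES:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue          # unreachable page: skip it
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            # Resolve relative links and drop #fragments before queueing.
            link, _ = urldefrag(urljoin(url, anchor["href"]))
            if link not in visited:
                frontier.append(link)
    return visited

crawl(["http://example.com/"])  # hypothetical seed list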

The Web's large volume implies that the crawler can download only a limited number of pages within a given time, so it needs to prioritize its downloads. Its high rate of change implies that, by the time the crawler reaches a page, the page might already have been updated or even deleted.

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
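
In the gallery example the count is just the product of the options: 4 sort orders × 3 thumbnail sizes × 2 file formats × 2 settings for user-provided content = 48 URLs. One common mitigation is to canonicalize URLs before they enter the frontier. The sketch below is a minimal Python illustration under the assumption that the site's presentation-only parameters are known; the names in PRESENTATION_PARAMS are hypothetical, standing in for the gallery example, not a standard list.

# Sketch of URL canonicalization to collapse presentation-only GET parameters
# before a URL is added to the frontier. The parameter names listed in
# PRESENTATION_PARAMS are hypothetical, standing in for the gallery example.
from urllib.parse import urlparse, urlunparse, urlencode, parse_qsl

PRESENTATION_PARAMS = {"sort", "thumb", "format", "show_user_content"}

def canonicalize(url):
    parts = urlparse(url)
    # Drop parameters that only change presentation, and sort the rest so
    # that parameter order cannot produce "new" URLs for the same content.
    kept = sorted((k, v) for k, v in parse_qsl(parts.query)
                  if k not in PRESENTATION_PARAMS)
    return urlunparse(parts._replace(query=urlencode(kept)))

# All 48 gallery variants collapse to one canonical URL, e.g.:
# canonicalize("http://example.com/gallery?album=7&sort=date&thumb=small&format=jpg")
#   -> "http://example.com/gallery?album=7"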

9683 questions

10 votes, 6 answers

How to crawl entire Wikipedia?

I've tried WebSphinx application. I realize if I put wikipedia.org as the starting URL, it will not crawl further. Hence, how to actually crawl the entire Wikipedia? Can anyone gimme some guidelines? Do I need to specifically go and find those URLs…
Mr CooL

10 votes, 1 answer

InvalidArgumentException: The current node list is empty. PHP-Spider (DOMCrawler Symfony)

I'm using PHP-Spider to crawl a website but when it can't find a .class it throws an error: InvalidArgumentException: The current node list is empty. The code is this: foreach ($spider->getPersistenceHandler() as $resource) { echo…
DimitrisBor

10 votes, 2 answers

Should I use different case-spellings for case-insensitive directories in robots.txt?

Unfortunately, I’ve got case-insensitive servers that cannot be replaced in the short term. Some directories need to be excluded from crawling, so I have to Disallow them in my robots.txt. Let’s take /Img/ as example. If I keep it all lower…
dakab

10 votes, 1 answer

Scrapy SgmlLinkExtractor is ignoring allowed links

Please take a look at this spider example in Scrapy documentation. The explanation is: This spider would start crawling example.com’s home page, collecting category links, and item links, parsing the latter with the parse_item method. For each item…
Zeynel

10 votes, 4 answers

How to get casper.js http.status code?

I have simple code below: var casper = require("casper").create({ }), utils = require('utils'), http = require('http'), fs = require('fs'); casper.start(); casper.thenOpen('http://www.yahoo.com/', function() { …
HP.

10 votes, 4 answers

Exclude bots and spiders from a View counter in PHP

I have built a pretty basic advertisement manager for a website in PHP. I say basic because it's not complex like Google or Facebook ads or even most high end ad servers. Doesn't handle payments or anything or even targeting users. It serves the…
JasonDavis

10 votes, 5 answers

How to allow crawlers access to index.php only, using robots.txt?

If i want to only allow crawlers to access index.php, will this work? User-agent: * Disallow: / Allow: /index.php
todd

10 votes, 2 answers

Facebook requests for {url}/no_facebook_preview_picture.jpg on 404 links

We operate a URL shortener, over the last week or so we've started seeing lots of weird requests for {normal url}/no_facebook_preview_picture.jpg from Facebook owned IPs and the user agent facebookexternalhit/1.0…
Blank

10 votes, 3 answers

Recrawl URL with Nutch just for updated sites

I crawled one URL with Nutch 2.1 and then I want to re-crawl pages after they got updated. How can I do this? How can I know that a page is updated?
Ilce MKD

10 votes, 1 answer

How to extend Nutch for article crawling

I'm look for a framework to grab articles, then I find Nutch 2.1. Here's my plan and questions in each: 1 Add article list pages into url/seed.txt Here's one problem. What I actually want to be indexed is the article pages, not the article list…
user1633272

10 votes, 4 answers

Do modern web crawlers use the click event or navigate directly to href on anchor tags?

I'm building a web site that I want to behave fancy-like for users, but want web crawlers to still be able to navigate properly. I have the following anchor tag: Projects With the following…
Mike Gwilt

9 votes, 2 answers

How to select a Radio Button using Mechanize in Ruby?

i am building a crawler and i am using Mechanize. I wish to click on a radio button. How do i do that ? Like for example there are two radio buttons say 'A' and 'B'. The website automatically selects B, but i want 'A' using Mechanize in ruby. I am…
Kaushik Thirthappa

9 votes, 3 answers

Error when I scrape Instagram accounts. Adding `?__a=1` to the URL doesn't work anymore. Any clues?

Until 2 days ago, I was able to scrape Instagram accounts by adding ?__a=1 at the end of the URL. E.g.: https://www.instagram.com/xavi/?__a=1 Now, when I do the same thing I get this response: for (;;); { "__ar": 1, "error": 1357004, …

9 votes, 4 answers

How to detect if a site lets you upload files?

I would like to be able to tell if a site lets you upload files. I can think of two main ways sites do it and ideally I'd like to be able to detect both: Button Drag & Drop PhantomJS documentation has this example snippet: var webPage =…
rudolfovic

9 votes, 2 answers

Interview question: Honeypots and web crawlers

I was recently reading a book as prep for an interview and came across the following question: What will you do when your crawler runs into a honey pot that generates an infinite subgraph for you to wander about? I wanted to get some solutions to…
OckhamsRazor