Questions tagged [web-crawler]

A Web crawler (also known as a Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or, especially in the FOAF community, Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, or to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
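
A minimal sketch of that seed-and-frontier loop in Python follows; the seed URL, page limit, and politeness delay are illustrative assumptions, not parameters of any particular crawler:

```python
import time
import urllib.parse
import urllib.request
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=100, delay=1.0):
    frontier = deque(seeds)   # the crawl frontier: URLs waiting to be visited
    seen = set(seeds)         # every URL ever added, to avoid revisiting
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue          # skip unreachable pages
        visited.append(url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)  # grow the frontier
        time.sleep(delay)     # crude politeness delay between requests
    return visited

# Example: crawl(["http://example.com/"], max_pages=10)
```

A real crawler would add robots.txt handling, per-host rate limits, and URL canonicalization on top of this skeleton.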

The Web's large volume implies that a crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The Web's high rate of change implies that pages may already have been updated or even deleted by the time the crawler revisits them.

The number of possible crawlable URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer four options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
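
One common mitigation is to canonicalize URLs before adding them to the frontier, so that different orderings of the same parameters, and parameters known to affect only presentation, collapse to a single form. A rough Python sketch; the ignored parameter names are hypothetical and would be site-specific in practice:

```python
import urllib.parse

# Parameters assumed to affect only presentation, not content.
# These names are hypothetical; a real list is site-specific.
IGNORED_PARAMS = {"sort", "thumb_size", "format", "hide_ugc"}

def canonicalize(url):
    """Lowercase the host, drop presentation-only query parameters,
    sort the remaining ones, and strip the fragment."""
    parts = urllib.parse.urlsplit(url)
    params = urllib.parse.parse_qsl(parts.query, keep_blank_values=True)
    kept = sorted((k, v) for k, v in params if k not in IGNORED_PARAMS)
    return urllib.parse.urlunsplit((
        parts.scheme.lower(),
        parts.netloc.lower(),
        parts.path,
        urllib.parse.urlencode(kept),
        "",  # drop the fragment
    ))

# Both of these collapse to http://example.com/gallery?page=2:
# canonicalize("http://example.com/gallery?sort=date&page=2")
# canonicalize("http://example.com/gallery?page=2&thumb_size=small")
```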

9,683 questions
26 votes, 5 answers

How do I allow Google to index login-required parts of my site?

It seems like Google can index certain sites or forums (I can't name any offhand, as it's been months since I last saw one) where, when you access them, you are prompted to register or log in. How would I make my site open for Google to index and…
user34537
26 votes, 3 answers

How can I safely check whether a node is empty or not? (Symfony 2 Crawler)

When I try to take some nonexistent content from a page, I catch this error: The current node list is empty. 500 Internal Server Error - InvalidArgumentException. How can I safely check whether this content exists or not? Here are some examples that do not…
user1581663
25 votes, 5 answers

Robots.txt: allow only major search engines

Is there a way to configure the robots.txt so that the site accepts visits ONLY from Google, Yahoo! and MSN spiders?
vyger
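
For reference, the usual pattern for this kind of whitelist robots.txt names each crawler's user-agent token and denies everyone else. Note that robots.txt is purely advisory, so it only keeps out well-behaved bots, and the tokens below (Googlebot, Slurp, msnbot) are the ones those engines have historically published and may change over time:

```
User-agent: Googlebot
Disallow:

User-agent: Slurp
Disallow:

User-agent: msnbot
Disallow:

User-agent: *
Disallow: /
```

An empty Disallow line permits everything for the named agent.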
25 votes, 4 answers

Scrapy: how to stop a redirect (302)

I'm trying to crawl a URL using Scrapy, but it redirects me to a page that doesn't exist: Redirecting (302) to…
user_2000
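
In Scrapy, redirects are followed by the built-in RedirectMiddleware; a request can opt out of it and receive the 302 response in its callback instead. A sketch, where the spider name and URL are placeholders:

```python
import scrapy

class NoRedirectSpider(scrapy.Spider):
    name = "noredirect"                       # hypothetical spider name
    start_urls = ["http://example.com/page"]  # placeholder URL

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url,
                callback=self.parse,
                meta={
                    # tell RedirectMiddleware not to follow redirects
                    "dont_redirect": True,
                    # let 302 responses reach the callback instead of
                    # being treated as errors
                    "handle_httpstatus_list": [302],
                },
            )

    def parse(self, response):
        # a 302 now arrives here, with the target in the Location header
        self.logger.info("status %s at %s", response.status, response.url)
```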
24 votes, 2 answers

How do I use the Python Scrapy module to list all the URLs from my website?

I want to use the Python Scrapy module to scrape all the URLs from my website and write the list to a file. I looked in the examples but didn't see a simple example of how to do this.
Adam F
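
A minimal CrawlSpider along those lines might look like this, with the spider name and domain as placeholders:

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class UrlListSpider(CrawlSpider):
    name = "urllist"                      # hypothetical spider name
    allowed_domains = ["example.com"]     # placeholder domain
    start_urls = ["http://example.com/"]

    # follow every in-domain link and pass each page to parse_item
    rules = (Rule(LinkExtractor(), callback="parse_item", follow=True),)

    def parse_item(self, response):
        yield {"url": response.url}       # one record per visited URL
```

Running it with, for example, `scrapy runspider urllist_spider.py -o urls.csv` writes the collected URLs to a file.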
24 votes, 8 answers

Facebook crawler is hitting my server hard and ignoring directives, accessing the same resources multiple times

The Facebook Crawler is hitting my servers multiple times every second and it seems to be ignoring both the Expires header and the og:ttl property. In some cases, it is accessing the same og:image resource multiple times over the space of 1-5…
Wayne Whitty
24 votes, 5 answers

What is the easiest way to run Python scripts on a cloud server?

I have a web-crawling Python script that takes hours to complete and is infeasible to run in its entirety on my local machine. Is there a convenient way to deploy this to a simple web server? The script basically downloads webpages into text files…
user1330691
24 votes, 1 answer

What is the difference between Scrapy's spider middleware and downloader middleware?

Both middlewares can process Request and Response objects, but what is the difference?
Zhang Jiuzhou
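
In short, downloader middleware sits between the engine and the downloader (close to the network), while spider middleware sits between the engine and the spider callbacks (close to your parsing code). A sketch of the two hook sets makes the distinction visible:

```python
class MyDownloaderMiddleware:
    """Sits between the engine and the downloader (near the network)."""

    def process_request(self, request, spider):
        # runs for every request before it is downloaded,
        # e.g. to set proxies or headers; None means "continue"
        return None

    def process_response(self, request, response, spider):
        # runs for every raw response coming back from the downloader,
        # e.g. to retry, rewrite, or drop it
        return response

class MySpiderMiddleware:
    """Sits between the engine and the spider (near your parsing code)."""

    def process_spider_input(self, response, spider):
        # runs before a response is handed to the spider callback
        return None

    def process_spider_output(self, response, result, spider):
        # runs over whatever the callback yields (items and new requests),
        # e.g. to filter or annotate them
        yield from result
```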
23 votes, 5 answers

How to set up a robots.txt which only allows the default page of a site

Say I have a site on http://example.com. I would really like to allow bots to see the home page, but every other page needs to be blocked, as it is pointless to spider. In other words, http://example.com and http://example.com/ should be allowed, but…
Boaz
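
A commonly cited answer relies on the Allow directive and the $ end-of-URL anchor. Both are extensions honored by the major engines (Google, Yahoo!, Bing) but not part of the original robots.txt standard, so smaller crawlers may ignore them:

```
User-agent: *
Allow: /$
Disallow: /
```

Here /$ matches only the bare root URL, and everything else is disallowed.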
23 votes, 4 answers

Selenium wait for Ajax content to load - universal approach

Is there a universal approach for Selenium to wait until all AJAX content has loaded? (Not tied to a specific website, so it works for every AJAX website.)
Fabian Lurz
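
There is no truly universal signal that all AJAX activity has finished, but a widely used heuristic on jQuery-based sites is to poll jQuery.active until it reaches zero. A Python sketch, where the URL is a placeholder and pages that issue AJAX calls without jQuery would need a different condition:

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("http://example.com")  # placeholder URL

def ajax_complete(driver):
    # jQuery.active counts in-flight jQuery AJAX requests;
    # pages without jQuery are treated as "done" here
    return driver.execute_script(
        "return (window.jQuery || {active: 0}).active == 0"
    )

# poll for up to 30 seconds for outstanding jQuery requests to finish
WebDriverWait(driver, 30).until(ajax_complete)
```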
22 votes, 3 answers

Very Simple C++ Web Crawler/Spider?

I am trying to write a very simple web crawler/spider app in C++. I have been searching Google for a simple one to understand the concept. I found this: spider_simpleCrawler. However, it is complicated for me to understand, since I started…
popurity09
22 votes, 6 answers

Alternative to HtmlUnit

I have been researching the headless browsers available to date and found that HtmlUnit is used pretty extensively. Is there any alternative to HtmlUnit with possible advantages over it?
Nayn
22 votes, 6 answers

How do web crawlers handle JavaScript?

Today a lot of content on the Internet is generated using JavaScript (specifically by background AJAX calls). I was wondering how web crawlers like Google's handle it. Are they aware of JavaScript? Do they have a built-in JavaScript engine? Or do they…
Shailesh Kumar
22 votes, 12 answers

Java Web Crawler Libraries

I wanted to make a Java-based web crawler for an experiment. I heard that making a web crawler in Java was the way to go if this is your first time. However, I have two important questions. How will my program 'visit' or 'connect' to web pages?…
CodeKingPlusPlus
21 votes, 9 answers

HttpWebResponse + StreamReader very slow

I'm trying to implement a limited web crawler in C# (for a few hundred sites only) using HttpWebResponse.GetResponse() and StreamReader.ReadToEnd(); I also tried using StreamReader.Read() and a loop to build my HTML string. I'm only downloading pages…
Roey