Questions tagged [web-crawler]

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion.

Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or, especially in the FOAF community, Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
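
As a sketch of this seed/frontier loop, a minimal breadth-first crawler might look like the following (the seed URL is a placeholder, and the regex-based link extraction is deliberately naive; a real crawler would use an HTML parser and apply politeness policies):

```python
from collections import deque
from re import findall
from urllib.parse import urljoin
from urllib.request import urlopen

def crawl(seeds, max_pages=100):
    """Visit URLs from the frontier breadth-first, queueing newly found links."""
    frontier = deque(seeds)   # the crawl frontier, initialized with the seeds
    visited = set(seeds)
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        fetched += 1
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue          # skip unreachable or non-HTTP pages
        # naive hyperlink extraction, for illustration only
        for href in findall(r'href="([^"]+)"', html):
            link = urljoin(url, href)
            if link not in visited:
                visited.add(link)
                frontier.append(link)

crawl(["https://example.com/"])  # hypothetical seed list
```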

The Web's large volume implies that a crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The Web's high rate of change implies that, by the time a crawler revisits a page, it might already have been updated or even deleted.

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
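
A common mitigation is URL normalization: strip the parameters known not to change the content and sort the rest, so that all equivalent URLs collapse to a single frontier key. A sketch, where the ignorable parameter names (sort, thumb, fmt, ugc) are hypothetical and would have to be discovered per site:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# presentation-only parameters (site-specific; these names are made up)
IGNORED = {"sort", "thumb", "fmt", "ugc"}

def canonical(url):
    parts = urlsplit(url)
    # drop presentation-only parameters and sort the rest for a stable key
    query = sorted((k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), ""))

# two of the 48 gallery variants collapse to the same canonical key
assert canonical("http://gallery.example/?sort=date&thumb=small&fmt=jpg") == \
       canonical("http://gallery.example/?fmt=png&sort=name&thumb=large")
```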

9683 questions
11
votes
2 answers

How to scrape all the content of each link with scrapy?

I am new to Scrapy and I would like to extract all the content of each advert from this website. So I tried the following: from scrapy.spiders import Spider from craigslist_sample.items import CraigslistSampleItem from scrapy.selector import…
student
  • 347
  • 3
  • 13
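
The usual Scrapy pattern for this kind of question is a two-stage spider: collect the links on the listing page, then parse each linked page in a second callback. A sketch with a hypothetical URL and selectors:

```python
import scrapy

class AdsSpider(scrapy.Spider):
    name = "ads"
    start_urls = ["https://example.com/listings"]  # placeholder listing page

    def parse(self, response):
        # queue every advert link for a full-page parse
        for href in response.css("a.result-title::attr(href)").getall():
            yield response.follow(href, callback=self.parse_ad)

    def parse_ad(self, response):
        yield {
            "url": response.url,
            "title": response.css("h1::text").get(),
            "body": " ".join(response.css("#postingbody ::text").getall()).strip(),
        }
```
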
11
votes
1 answer

Crawler4j vs. Jsoup for crawling and parsing pages in Java

I want to get the content of a page and extract the specific parts of it. As far as I know, there are at least two solutions for such a task: Crawler4j and Jsoup. Both of them are capable of retrieving the content of a page and extracting sub-parts of it.…
Mike
  • 14,010
  • 29
  • 101
  • 161
11
votes
1 answer

Difference between scraper, crawler and spider in the context of Scrapy

Trying to read the code of Scrapy. The words scraper, crawler and spider are confusing. For example: scrapy.core.scraper, scrapy.crawler, scrapy.spiders. Could anyone explain the meanings and differences of these terms in the context of Scrapy? Thanks…
Frozen Flame
  • 3,135
  • 2
  • 23
  • 35
11
votes
5 answers

How to tell if a web request is coming from Google's crawler?

From the HTTP server's perspective.
orph
  • 113
  • 1
  • 1
  • 5
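
Google's documented recommendation for this question is a two-step DNS check: reverse-resolve the client IP, verify the hostname belongs to googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A sketch:

```python
import socket

def is_googlebot(ip):
    """Reverse DNS, domain check, then a confirming forward lookup."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # e.g. crawl-66-249-66-1.googlebot.com
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup must match
    except socket.gaierror:
        return False
```
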
11
votes
9 answers

What are the key considerations when creating a web crawler?

I just started thinking about creating/customizing a web crawler today, and know very little about web crawler/robot etiquette. A majority of the writings on etiquette I've found seem old and awkward, so I'd like to get some current (and practical)…
Ian Robinson
  • 16,892
  • 8
  • 47
  • 61
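
The baseline etiquette rules have not changed: send an identifying User-Agent, honour robots.txt, and throttle your requests. A sketch using the standard library's robots.txt parser (the URLs, bot name, and one-second delay are arbitrary placeholders):

```python
import time
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser

AGENT = "MyCrawler/0.1 (+https://example.com/bot)"  # hypothetical bot info page

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()

for url in ["https://example.com/a", "https://example.com/b"]:
    if not robots.can_fetch(AGENT, url):
        continue  # respect Disallow rules
    urlopen(Request(url, headers={"User-Agent": AGENT}))
    time.sleep(1.0)  # crude politeness delay between requests
```
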
11
votes
3 answers

Selenium pdf automatic download not working

I am new to Selenium and I am writing a scraper to download PDF files automatically from a given site. Below is my code: from selenium import webdriver fp =…
Gaara
  • 695
  • 3
  • 8
  • 23
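
The usual fix here is to configure Firefox to save PDFs to disk instead of opening them in the built-in viewer. A sketch using Selenium 4's options API (the download directory and target URL are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.set_preference("browser.download.folderList", 2)     # use a custom directory
options.set_preference("browser.download.dir", "/tmp/pdfs")  # placeholder path
options.set_preference("browser.helperApps.neverAsk.saveToDisk",
                       "application/pdf")                    # skip the save dialog
options.set_preference("pdfjs.disabled", True)               # bypass the built-in viewer

driver = webdriver.Firefox(options=options)
driver.get("https://example.com/report.pdf")  # hypothetical PDF URL
driver.quit()
```
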
11
votes
2 answers

How to build a web crawler based on Scrapy to run forever?

I want to build a web crawler based on Scrapy to grab news pictures from several news portal websites. I want this crawler to: run forever, meaning it will periodically re-visit some portal pages to get updates; schedule priorities, giving different…
superb
  • 963
  • 1
  • 10
  • 21
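
Scrapy covers the prioritization part natively: its scheduler dequeues requests with higher priority values first, so important sections can be queued ahead of the backlog. The "run forever" part is usually handled outside the process, by re-running the spider on a schedule (cron, scrapyd). A sketch of the priority side with hypothetical selectors:

```python
import scrapy

class NewsImageSpider(scrapy.Spider):
    name = "news_images"
    start_urls = ["https://news.example.com/"]  # hypothetical portal front page

    def parse(self, response):
        for href in response.css("a.headline::attr(href)").getall():
            # higher-priority requests are fetched before lower-priority ones
            yield response.follow(href, callback=self.parse_article, priority=10)

    def parse_article(self, response):
        for src in response.css("img::attr(src)").getall():
            yield {"image_url": response.urljoin(src)}
```
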
11
votes
1 answer

Best solution to host a crawler?

I have a crawler that crawls a few different domains for new posts/content. The total amount of content is hundreds of thousands of pages, and a lot of new content is added each day. So to be able to crawl through all this content, I need my…
Marcus Lind
  • 10,374
  • 7
  • 58
  • 112
11
votes
1 answer

Websites that are particularly challenging to crawl and scrape?

I'm interested in public-facing sites (nothing behind a login / authentication) that have things like: High use of internal 301 and 302 redirects Anti-scraping measures (but not banning crawlers via robots.txt) Non-semantic or invalid…
David Pratt
  • 703
  • 7
  • 16
11
votes
3 answers

Crawling the Google Play store

I want to crawl the Google Play store to download the web pages of all the Android applications (all the pages with the following base URL: https://play.google.com/store/apps/). I checked the robots.txt file of the Play store and it disallows…
Naruto Uzumaki
  • 958
  • 4
  • 17
  • 37
11
votes
3 answers

Scrapy - Select specific link based on text

This should be easy but I'm stuck.…
hoof_hearted
  • 675
  • 1
  • 9
  • 18
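
For selecting a link by its visible text, an XPath text predicate is the usual answer (the link text and callback name below are just examples):

```python
# inside a Scrapy callback: pick the <a> whose visible text matches exactly
href = response.xpath('//a[normalize-space(text())="Link Text 2"]/@href').get()
if href is not None:
    yield response.follow(href, callback=self.parse_next)  # parse_next is hypothetical
```
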
11
votes
3 answers

What database for crawler/scraper?

I am currently researching what database to use for a project I am working on. Hopefully you guys can give me some hints. The project is an automated web crawler that checks websites as per a user's request, scrapes data under certain circumstances,…
KonstantinK
  • 757
  • 1
  • 8
  • 23
10
votes
6 answers

Tor Web Crawler

Ok, here's what I need. I have a PHP-based web crawler. It is accessible here: http://rz7ocnxxu7ka6ncv.onion/ Now, my problem is that my spider that actually crawls pages needs to do so through SOCKS port 9050. The thing is, I have to tunnel its…
user1203301
  • 101
  • 1
  • 1
  • 3
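
The crawler in this question is PHP, but the SOCKS routing works the same from any language. A Python sketch using requests with SOCKS support installed (pip install requests[socks]), assuming Tor is listening on its default 127.0.0.1:9050:

```python
import requests

# "socks5h" (not "socks5") resolves hostnames through the proxy,
# which .onion addresses require
TOR = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}

resp = requests.get("http://rz7ocnxxu7ka6ncv.onion/", proxies=TOR, timeout=30)
print(resp.status_code)
```
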
10
votes
4 answers

Get past request limit in crawling a web site

I'm working on a web crawler that indexes sites that don't want to be indexed. My first attempt: I wrote a C# crawler that goes through each and every page and downloads them. This resulted in my IP being blocked by their servers within 10…
brandon
  • 1,230
  • 3
  • 13
  • 31
10
votes
2 answers

Get outlinks from Nutch

I am using Nutch 1.3 to crawl a website. I want to get a list of the URLs crawled, and the URLs originating from each page. I get the list of URLs crawled using the readdb command: bin/nutch readdb crawl/crawldb -dump file. Is there a way to find out URLs that are on…
surajz
  • 3,471
  • 3
  • 32
  • 38
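
For reference, in Nutch 1.x the readdb command dumps the CrawlDb (the fetched URLs); the inverted link database can be dumped with readlinkdb, and per-page outlinks are stored in the segments, which readseg can dump (the paths below are examples):

```
# dump fetched URLs from the CrawlDb
bin/nutch readdb crawl/crawldb -dump crawldb_dump

# dump the link database (inlinks per URL)
bin/nutch readlinkdb crawl/linkdb -dump linkdb_dump

# dump a segment; its parse data includes each page's outlinks
bin/nutch readseg -dump crawl/segments/20240101000000 segment_dump
```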