Questions tagged [web-crawler]

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion.

Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or, especially in the FOAF community, Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which will index the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, or to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
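As a concrete illustration of the seed/frontier loop described above, here is a minimal sketch in Python. It assumes the requests and beautifulsoup4 packages are available; the function name crawl and the max_pages limit are illustrative, not taken from any particular crawler.

```python
# Minimal sketch of the seed/frontier loop described above.
# Assumes the `requests` and `beautifulsoup4` packages; names are illustrative.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seeds, max_pages=100):
    frontier = deque(seeds)          # URLs still to visit (the "crawl frontier")
    visited = set()                  # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])  # resolve relative links
            if link not in visited:
                frontier.append(link)            # grow the frontier
    return visited
```

Using a deque and popleft gives a breadth-first visiting policy; real crawlers layer politeness delays, robots.txt checks, and download prioritization on top of this basic loop.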

The Web's large volume means that a crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. Its high rate of change means that pages may already have been updated or even deleted by the time the crawler revisits them.

The number of possible crawlable URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer four options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
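A common mitigation for this parameter explosion is URL canonicalization: drop presentation-only parameters and sort the rest before deciding whether a URL has already been seen. A small Python sketch follows; the parameter names are hypothetical.

```python
# Sketch of URL canonicalization to collapse parameter permutations
# (parameter names like "sort" and "thumb" are hypothetical).
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

IGNORED_PARAMS = {"sort", "thumb", "format", "hide_user_content"}

def canonicalize(url):
    parts = urlparse(url)
    # Drop presentation-only parameters and sort the rest so that
    # different orderings of the same query map to one canonical URL.
    query = sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS
    )
    return urlunparse(parts._replace(query=urlencode(query)))

assert canonicalize("http://example.com/gallery?id=7&sort=date&thumb=large") == \
       canonicalize("http://example.com/gallery?thumb=small&id=7&sort=name")
```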

9683 questions
17
votes
3 answers

prevent NodeJS program from exiting

I am creating a NodeJS-based crawler that works with the node-cron package, and I need to prevent the entry script from exiting, since the application should run forever as a cron job and execute crawlers at certain periods with logs. In the web application,…
Aren Hovsepyan
  • 1,947
  • 2
  • 17
  • 45
17
votes
5 answers

Protecting email addresses from spam bots / web crawlers

How do you prevent email addresses from being gathered from web pages by email spiders? Does linking them with mailto: increase the likelihood of them being picked up? Is URL-encoding useful? Obviously the best counter-measure is to only show email addresses to…
Zaz
  • 46,476
  • 14
  • 84
  • 101
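One partial measure often suggested for the question above is to emit the address as HTML character references, so naive harvesters scanning for literal user@domain strings miss it; determined crawlers can still decode it, so this is obfuscation rather than protection. A minimal Python sketch:

```python
# Render an email address as HTML character references so simple regex-based
# harvesters miss the literal "user@example.com" string. This is obfuscation,
# not real protection: a crawler that decodes entities will still find it.
def entity_encode(address):
    return "".join("&#{};".format(ord(ch)) for ch in address)

print(entity_encode("user@example.com"))
# -> &#117;&#115;&#101;&#114;&#64;... (renders as the normal address in a browser)
```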
17
votes
1 answer

How to improve SEO for single page application

We have built a search engine for vacancies. For reasons of speed and a good user experience, we used the architecture of a “Single Page Application” (SPA). We know that for an SPA architecture it is a challenge to enable SEO, so we did quite a…
17
votes
2 answers

Python Scrapy on offline (local) data

I have a 270MB dataset (10000 html files) on my computer. Can I use Scrapy to crawl this dataset locally? How?
Sagi
  • 329
  • 1
  • 4
  • 13
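One way the question above is commonly approached is to point Scrapy at file:// URLs so it crawls the saved pages straight from disk. A rough sketch, with a hypothetical directory path and selector:

```python
# Sketch of pointing Scrapy at local files via file:// URLs
# (the directory path and the CSS selector are hypothetical).
import pathlib
import scrapy

class LocalHtmlSpider(scrapy.Spider):
    name = "local_html"
    # Build file:// start URLs from every .html file in the local dataset directory.
    start_urls = [
        p.as_uri() for p in pathlib.Path("/data/html_dump").glob("*.html")
    ]

    def parse(self, response):
        yield {"title": response.css("title::text").get()}
```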
17
votes
4 answers

Is there a list of known web crawlers?

I'm trying to get accurate download numbers for some files on a web server. I look at the user agents: some are clearly bots or web crawlers, but for many I'm not sure whether or not they are web crawlers, and they are causing many downloads…
Pablo Fernandez
  • 279,434
  • 135
  • 377
  • 622
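There is no single authoritative registry of crawlers, but a typical log-analysis shortcut for the question above is to match user-agent strings against a short list of well-known bot tokens plus generic markers like "bot" and "spider". A sketch (the token list is illustrative, not complete):

```python
# Rough user-agent screen for log analysis (token list is illustrative,
# not a complete or authoritative registry of crawlers).
KNOWN_BOT_TOKENS = ("googlebot", "bingbot", "slurp", "duckduckbot",
                    "baiduspider", "yandexbot", "facebookexternalhit")

def looks_like_crawler(user_agent):
    ua = user_agent.lower()
    # Many generic bots also self-identify with these substrings.
    return any(token in ua for token in KNOWN_BOT_TOKENS) or \
           "bot" in ua or "crawler" in ua or "spider" in ua

print(looks_like_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
```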
17
votes
1 answer

Are Meta Keywords Case Sensitive?

Is a keywords meta tag written in lowercase the same thing as one written in uppercase? Are both considered the same by a web crawler?
Lloyd Banks
  • 35,740
  • 58
  • 156
  • 248
16
votes
6 answers

How do I remove a query from a URL?

I am using Scrapy to crawl a site which seems to be appending random values to the query string at the end of each URL. This is turning the crawl into a sort of infinite loop. How do I make Scrapy ignore the query-string part of the URLs?
Sanket Gupta
  • 573
  • 3
  • 6
  • 21
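One way to handle the question above is to strip the query string from extracted links before requesting them, for example with urllib.parse. The helper below is a sketch; it could be applied in the spider's parse callback before yielding requests, or passed as a LinkExtractor's process_value hook.

```python
# Sketch: strip the query string from extracted links before requesting them,
# so randomized parameters don't create an endless crawl (names are illustrative).
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    parts = urlsplit(url)
    # Keep scheme, host, and path; discard query and fragment.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(strip_query("http://example.com/page?session=abc123#top"))
# -> http://example.com/page
```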
16
votes
3 answers

How do you spider with PhantomJS?

I am trying to leverage PhantomJS to spider an entire domain. I want to start at the root domain, e.g. www.domain.com, pull all links (a.href), and then have a queue for fetching each new link, adding new links to the queue if they haven't been…
John Murch
  • 199
  • 1
  • 1
  • 5
16
votes
3 answers

How do I prevent Bing from swamping my site with traffic irregularly?

Bingbot will hit my site pretty hard for a couple of hours each day, and will be extremely light for the rest of the time. I'd either like to smooth out its crawls, reduce its rate limit, or block it altogether. It doesn't really send through any…
Tim Haines
  • 1,496
  • 3
  • 14
  • 16
16
votes
3 answers

Python 3 - Add custom headers to urllib.request Request

In Python 3, the following code obtains the HTML source for a webpage.

import urllib.request
url = "https://docs.python.org/3.4/howto/urllib2.html"
response = urllib.request.urlopen(url)
response.read()

How can I add the following custom header to…
rovyko
  • 4,068
  • 5
  • 32
  • 44
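For the question above, the standard pattern is to build a urllib.request.Request with a headers dict and pass it to urlopen; the User-Agent value here is just an example header, since the excerpt doesn't say which header is needed.

```python
# Send custom headers with urllib.request in Python 3:
# build a Request object with a headers dict and pass it to urlopen.
import urllib.request

url = "https://docs.python.org/3.4/howto/urllib2.html"
req = urllib.request.Request(url, headers={"User-Agent": "my-crawler/1.0"})
with urllib.request.urlopen(req) as response:
    html = response.read()
```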
16
votes
6 answers

How do I scrape all content from an infinite-scroll website with Scrapy?

I'm using Scrapy. The website I'm crawling has infinite scroll. The site has loads of posts, but I only scraped 13. How do I scrape the rest of the posts? Here's my code:

class exampleSpider(scrapy.Spider):
    name = "example"
    #from_date =…
Michimcchicken
  • 543
  • 1
  • 6
  • 21
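Infinite-scroll pages usually fetch further posts from a JSON endpoint behind the scenes, so one approach to the question above is to request that endpoint directly with an incrementing page parameter. The endpoint URL, parameter, and JSON fields below are hypothetical; the real ones can be found in the browser's network tab.

```python
# Sketch of paging through the AJAX endpoint that backs an infinite-scroll page.
# The endpoint URL, "page" parameter, and JSON layout are hypothetical.
import json
import scrapy

class PostsSpider(scrapy.Spider):
    name = "posts"
    start_urls = ["https://example.com/api/posts?page=1"]

    def parse(self, response):
        data = json.loads(response.text)
        for post in data.get("posts", []):
            yield {"title": post.get("title")}
        if data.get("has_more"):           # keep requesting until the feed runs dry
            page = response.meta.get("page", 1) + 1
            yield response.follow(
                f"https://example.com/api/posts?page={page}",
                callback=self.parse,
                meta={"page": page},
            )
```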
16
votes
1 answer

Make Scrapy follow links and collect data

I am trying to write a program in Scrapy to open links and collect data from a specific tag. I've managed to make Scrapy collect all the links from the given URL but not to follow them. Any help is much appreciated.
Arkan Kalu
  • 403
  • 2
  • 4
  • 16
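A minimal follow-then-parse pattern for the question above is to yield response.follow() for each extracted link, with a callback that collects the target data. The selectors and field names are placeholders.

```python
# Minimal follow-then-parse pattern (selectors and field names are placeholders).
import scrapy

class FollowSpider(scrapy.Spider):
    name = "follow"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # Follow every link found on the listing page.
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse_item)

    def parse_item(self, response):
        # Collect data from the target tag on the followed page.
        yield {"text": response.css("span.price::text").get()}
```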
16
votes
7 answers

Recommendations for a spidering tool to use with Lucene or Solr?

What is a good crawler (spider) to use against HTML and XML documents (local or web-based) and that works well in the Lucene / Solr solution space? Could be Java-based but does not have to be.
BuddyJoe
  • 69,735
  • 114
  • 291
  • 466
16
votes
3 answers

Scrapy, only follow internal URLS but extract all links found

I want to get all external links from a given website using Scrapy. Using the following code, the spider crawls external links as well:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from…
sboss
  • 957
  • 1
  • 7
  • 21
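One pattern that fits the question above: restrict the Rule's LinkExtractor to the site's own domain so only internal pages are followed, and run a second, unrestricted LinkExtractor inside the callback to record every link, external ones included. This sketch uses the modern scrapy.spiders / scrapy.linkextractors import paths (the scrapy.contrib paths in the question are deprecated); the domain is a placeholder.

```python
# Follow only internal links, but record every link found on each page.
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class LinksSpider(CrawlSpider):
    name = "links"
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/"]

    # Follow links only within the site itself...
    rules = (
        Rule(LinkExtractor(allow_domains=["example.com"]),
             callback="parse_page", follow=True),
    )

    def parse_page(self, response):
        # ...but extract and emit every link on the page, external ones included.
        for link in LinkExtractor().extract_links(response):
            yield {"url": link.url}
```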
16
votes
6 answers

What's a good Web crawler tool?

I need to index a whole lot of webpages; what good web-crawler utilities are there? I'm preferably after something that .NET can talk to, but that's not a showstopper. What I really need is something that I can give a site URL to and it will follow…
Glenn Slaven
  • 33,720
  • 26
  • 113
  • 165