Questions tagged [web-crawler]

A Web crawler (also known as Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, or to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
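
The seed/frontier loop described above can be captured in a few lines. Below is a minimal sketch using only the Python standard library; the regex-based link extraction and the page cap are simplifications, and a real crawler would also honor robots.txt, politeness delays, and proper HTML parsing.

```python
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def crawl(seeds, max_pages=50):
    """Visit pages breadth-first, starting from the seed URLs."""
    frontier = deque(seeds)   # the crawl frontier
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # unreachable or non-decodable page; skip it
        # Naive link extraction; a real crawler would use an HTML parser.
        for href in re.findall(r'href="([^"#]+)"', html):
            frontier.append(urljoin(url, href))
    return visited
```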

The Web's large volume implies that a crawler can download only a limited number of pages within a given time, so it needs to prioritize its downloads. Its high rate of change implies that by the time a crawler reaches a page, the page might already have been updated or even deleted.
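
Prioritization is often implemented as a priority queue over the frontier. The scoring below (fetch frequently changing pages first) is only an assumed illustration; production crawlers weigh page importance, freshness estimates, and crawl depth.

```python
import heapq

frontier = []  # priority queue of (score, url); lower score = fetch sooner

def schedule(url, score):
    heapq.heappush(frontier, (score, url))

def next_url():
    return heapq.heappop(frontier)[1]

# Hypothetical scores: the news page changes often, the about page rarely.
schedule("https://example.com/news", score=0.1)
schedule("https://example.com/about", score=0.9)
print(next_url())  # https://example.com/news is fetched first
```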

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
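
One common mitigation is URL canonicalization: stripping or normalizing query parameters that change presentation but not content. Which parameters are safe to drop is site-specific; the parameter names below match the hypothetical gallery above.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Presentation-only parameters for the hypothetical gallery above.
IGNORED = {"sort", "thumb_size", "format", "hide_user_content"}

def canonicalize(url):
    """Map the 48 gallery variants onto one canonical URL."""
    scheme, netloc, path, query, _ = urlsplit(url)  # drop the fragment
    kept = sorted((k, v) for k, v in parse_qsl(query) if k not in IGNORED)
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))

print(canonicalize("http://gallery.example.com/?sort=date&format=png"))
# -> http://gallery.example.com/
```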

9683 questions

35 votes · 6 answers

How to give URL to scrapy for crawling?

I want to use Scrapy for crawling web pages. Is there a way to pass the start URL from the terminal itself? The documentation says that either the name of the spider or the URL can be given, but when I give the URL it throws an…
G Gill
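
Scrapy does support passing spider arguments from the terminal with the -a flag, which is the usual way to supply a start URL at run time. A minimal sketch, with the spider name and argument name chosen for illustration:

```python
import scrapy

class MySpider(scrapy.Spider):
    name = "myspider"

    def __init__(self, start_url=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Seed the crawl from the command-line argument, if one was given.
        self.start_urls = [start_url] if start_url else []

    def parse(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}
```

Run with: scrapy crawl myspider -a start_url=http://example.com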

35 votes · 3 answers

unknown command: crawl error

I am a newbie to Python. I am running Python 2.7.3, 32-bit, on a 64-bit OS. (I tried 64-bit but it didn't work out.) I followed the tutorial and installed Scrapy on my machine. I have created one project, demoz. But when I enter scrapy crawl…
Nits

33 votes · 5 answers

How can I scrape pages with dynamic content using node.js?

I am trying to scrape a website but I don't get some of the elements, because they are dynamically created. I use cheerio in node.js and my code is below. var request = require('request'); var cheerio = require('cheerio'); var url =…
JayD
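
The usual diagnosis here is that request plus cheerio only see the raw HTML, before any JavaScript runs; a real browser engine is needed to render dynamically created elements. Since most examples under this tag are Python, here is a hedged sketch of the same technique with Selenium driving headless Chrome (the target URL is a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # no visible browser window
driver = webdriver.Chrome(options=options)

driver.get("https://example.com")  # placeholder for the dynamic page
# The browser executes the page's JavaScript, so dynamically created
# elements appear in page_source, unlike a raw HTTP fetch.
html = driver.page_source
driver.quit()
```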

31 votes · 3 answers

Send Post Request in Scrapy

I am trying to crawl the latest reviews from the Google Play store, and to get them I need to make a POST request. With Postman it works and I get the desired response, but a POST request in the terminal gives me a server error. For ex: this page…
Amit Tripathi
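
Scrapy provides FormRequest for form-encoded POST requests. A minimal sketch; the endpoint and form fields below are placeholders, not the actual Play Store parameters:

```python
import scrapy

class ReviewSpider(scrapy.Spider):
    name = "reviews"

    def start_requests(self):
        # FormRequest issues a POST with a form-urlencoded body.
        yield scrapy.FormRequest(
            url="https://example.com/getreviews",             # placeholder endpoint
            formdata={"id": "com.example.app", "page": "1"},  # placeholder fields
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info("got %d bytes", len(response.body))
```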

29 votes · 4 answers

I need a Powerful Web Scraper library

I need a powerful web scraper library for mining content from the web. It can be paid or free; both are fine for me. Please suggest a library, or a better way to mine the data and store it in my preferred database. I have searched but I didn't…
Pankaj Mishra

29 votes · 6 answers

Scrapy - Reactor not Restartable

with: from twisted.internet import reactor from scrapy.crawler import CrawlerProcess I've always run this process successfully: process = CrawlerProcess(get_project_settings()) process.crawl(*args) # the script will block here until the crawling is…
8-Bit Borges
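
Twisted's reactor can only be started once per process, which is why a second CrawlerProcess run fails. The pattern suggested in Scrapy's documentation is to drive all crawls through one CrawlerRunner and stop the reactor once at the end. A sketch, with MySpider and its import path standing in for your own spider:

```python
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

from myproject.spiders import MySpider  # placeholder import

configure_logging()
runner = CrawlerRunner(get_project_settings())

# Schedule the crawl, then stop the (never restarted) reactor exactly once.
d = runner.crawl(MySpider)
d.addBoth(lambda _: reactor.stop())
reactor.run()  # blocks until the crawl finishes
```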

29 votes · 2 answers

How to force scrapy to crawl duplicate url?

I am learning Scrapy, a web crawling framework. By default it does not crawl duplicate URLs or URLs which Scrapy has already crawled. How do I make Scrapy crawl duplicate URLs, or URLs which have already been crawled? I tried to find out on the internet…
Alok
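
Scrapy's scheduler drops requests its duplicate filter has already seen; passing dont_filter=True to a Request bypasses that filter. A minimal sketch with a placeholder start URL:

```python
import scrapy

class RevisitSpider(scrapy.Spider):
    name = "revisit"
    start_urls = ["https://example.com"]  # placeholder

    def parse(self, response):
        # dont_filter=True bypasses the duplicate filter, so the same
        # URL can be scheduled and crawled again.
        yield scrapy.Request(
            response.url, callback=self.parse_again, dont_filter=True
        )

    def parse_again(self, response):
        self.logger.info("revisited %s", response.url)
```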

29 votes · 3 answers

Difference between find and filter in jquery

I'm working on fetching data from wiki pages. I'm using a combination of PHP and jQuery to do this. First I am using curl in PHP to fetch the page contents and echo the content. The filename is content.php: $url = $_GET['url']; $url = trim($url,"…
Krishna Deepak

27 votes · 2 answers

How to generate the start_urls dynamically in crawling?

I am crawling a site which may contain a lot of start_urls, like: http://www.a.com/list_1_2_3.htm I want to populate start_urls like [list_\d+_\d+_\d+\.htm], and extract items from URLs like [node_\d+\.htm] during crawling. Can I use CrawlSpider…
user1215269
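
Rather than listing every start URL, a spider can generate them in start_requests() and filter followed links against the node pattern from the question. A sketch; the URL ranges are illustrative:

```python
import re
import scrapy

class ListSpider(scrapy.Spider):
    name = "lists"

    def start_requests(self):
        # Generate the list pages programmatically; the ranges are made up.
        for a in range(1, 3):
            for b in range(1, 3):
                yield scrapy.Request(
                    f"http://www.a.com/list_{a}_{b}_1.htm", callback=self.parse
                )

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            if re.search(r"node_\d+\.htm$", href):  # item pages only
                yield response.follow(href, callback=self.parse_node)

    def parse_node(self, response):
        yield {"url": response.url}
```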

27 votes · 4 answers

Save complete web page (incl css, images) using python/selenium

I am using Python/Selenium to submit genetic sequences to an online database, and want to save the full page of results I get back. Below is the code that gets me to the results I want: from selenium import webdriver URL =…
Max Power

27 votes · 8 answers

What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different but do not add any value, as they are specifically created…
Tom
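
Spider traps cannot be detected with certainty, so crawlers rely on heuristics: caps on URL length, link depth, and pages per host, plus near-duplicate content detection. A sketch of such guards, with thresholds chosen arbitrarily:

```python
from urllib.parse import urlparse

MAX_DEPTH = 10            # arbitrary illustrative thresholds
MAX_URL_LENGTH = 200
MAX_PAGES_PER_HOST = 1000

pages_per_host = {}

def looks_like_trap(url, depth):
    """Heuristic only: flag deep, overlong, or endlessly prolific hosts."""
    if depth > MAX_DEPTH or len(url) > MAX_URL_LENGTH:
        return True
    host = urlparse(url).netloc
    pages_per_host[host] = pages_per_host.get(host, 0) + 1
    return pages_per_host[host] > MAX_PAGES_PER_HOST
```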

27 votes · 5 answers

Python Web Crawlers and "getting" html source code

So my brother wanted me to write a web crawler in Python (self-taught) and I know C++, Java, and a bit of HTML. I'm using version 2.7 and reading the Python library reference, but I have a few problems. 1. httplib.HTTPConnection and request concept to me is…
Dan
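
For simply fetching a page's source, the standard library is enough. The httplib module named in the question is Python 2 (http.client in Python 3), but urllib.request is the more convenient layer on top of it. A minimal sketch with a placeholder URL:

```python
from urllib.request import urlopen

# urlopen handles the connection and request that httplib exposes at a
# lower level; read() returns the raw response body as bytes.
with urlopen("https://example.com") as resp:  # placeholder URL
    html = resp.read().decode("utf-8", errors="replace")

print(html[:200])  # first 200 characters of the page source
```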

27 votes · 8 answers

Wikipedia text download

I am looking to download the full text of Wikipedia for my college project. Do I have to write my own spider to download this, or is there a public dataset of Wikipedia available online? To just give you some overview of my project, I want to find out the…
Boolean

26 votes · 2 answers

Can Scrapy be replaced by pyspider?

I've been using the Scrapy web-scraping framework pretty extensively, but recently I've discovered that there is another framework/system called pyspider, which, according to its GitHub page, is fresh, actively developed and popular. pyspider's home…
alecxe

26 votes · 5 answers

How to crawl Facebook based on friendship information?

I'm a graduate student whose research is complex networks. I am working on a project that involves analyzing connections between Facebook users. Is it possible to write a crawler for Facebook based on friendship information? I looked around but…
knguyen