Questions tagged [web-crawler]

A Web crawler (also known as Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
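
As a rough illustration of the seed-and-frontier loop described above, here is a minimal sketch using only the Python standard library; the seed URL, the page limit, and the absence of politeness controls (robots.txt, crawl delays) are simplifications for illustration, not details from the tag wiki.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=50):
    frontier = deque(seeds)   # the crawl frontier: URLs waiting to be visited
    visited = set()           # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue          # skip pages that fail to download or decode
        visited.add(url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)        # resolve relative links
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)        # grow the frontier
    return visited

if __name__ == "__main__":
    print(crawl(["https://example.com/"]))
```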

The large volume of the Web implies that a crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time a crawler revisits a page, it may already have been updated or even deleted.

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
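
One common mitigation, not spelled out above, is to canonicalize URLs before adding them to the frontier so that reordered or presentation-only parameters collapse to a single frontier entry. A minimal sketch; the ignored parameter names are made-up examples and would be site-specific in practice.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical parameters assumed not to change the page content.
IGNORED_PARAMS = {"sort", "thumb_size", "sessionid"}

def canonicalize(url):
    """Return a normalized form of the URL so that trivially different
    parameter orderings map to the same frontier entry."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
              if k not in IGNORED_PARAMS]
    params.sort()
    return urlunsplit((scheme.lower(), netloc.lower(), path,
                       urlencode(params), ""))

# Two gallery URLs that differ only in ignored or reordered parameters
# collapse to the same canonical key.
print(canonicalize("http://example.com/gallery?sort=date&format=jpg&page=2"))
print(canonicalize("http://example.com/gallery?page=2&format=jpg&sort=name"))
```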

9683 questions

19 votes · 4 answers

Pulling data from a webpage, parsing it for specific pieces, and displaying it

I've been using this site for a long time to find answers to my questions, but I wasn't able to find the answer on this one. I am working with a small group on a class project. We're to build a small "game trading" website that allows people to…
Aloehart · 337

19 votes · 6 answers

Locally run all of the spiders in Scrapy

Is there a way to run all of the spiders in a Scrapy project without using the Scrapy daemon? There used to be a way to run multiple spiders with scrapy crawl, but that syntax was removed and Scrapy's code changed quite a bit. I tried creating my…
Blender · 289,723
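
For the question above, one commonly suggested approach in current Scrapy releases is to drive every spider from one script with CrawlerProcess instead of the Scrapy daemon. This is a sketch, not the asker's setup, and it assumes it is run from the project root so the project settings and spiders can be found.

```python
# run_all.py - place in the root of the Scrapy project, next to scrapy.cfg
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())

# spider_loader.list() returns the name of every spider defined in the project
for spider_name in process.spider_loader.list():
    process.crawl(spider_name)

process.start()  # blocks until every scheduled spider has finished
```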

18 votes · 2 answers

Can I use WGET to generate a sitemap of a website given its URL?

I need a script that can spider a website and return the list of all crawled pages in plain text or a similar format, which I will submit to search engines as a sitemap. Can I use WGET to generate a sitemap of a website? Or is there a PHP script that…
Salman A · 262,204

18 votes · 5 answers

Distributed Web crawling using Apache Spark - Is it Possible?

An interesting question was asked of me when I attended an interview regarding web mining. The question was: is it possible to crawl websites using Apache Spark? I guessed that it was possible, because it supports the distributed processing capacity of…
New Man · 219

18 votes · 5 answers

Web crawler that can interpret JavaScript

I want to write a web crawler that can interpret JavaScript. Basically it's a program in Java or PHP that takes a URL as input and outputs the DOM tree, similar to the output in Firebug's HTML window. The best example is Kayak.com, where you can…
user320662 · 181
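
Questions like the one above are usually answered by driving a real browser engine so that scripts actually run before the DOM is read. A minimal sketch with Selenium's Python bindings (the question asks about Java or PHP, so this only illustrates the idea); kayak.com is the site named in the question.

```python
from selenium import webdriver

driver = webdriver.Firefox()          # any WebDriver-controlled browser works
driver.get("https://www.kayak.com/")
dom_after_js = driver.page_source     # DOM serialized after scripts have run
print(dom_after_js[:500])
driver.quit()
```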

18 votes · 9 answers

Does solr do web crawling?

I am interested in doing web crawling. I was looking at Solr. Does Solr do web crawling, or what are the steps to do it?

18 votes · 3 answers

Is it possible for Scrapy to get plain text from raw HTML data?

For example:

    scrapy shell http://scrapy.org/
    content = hxs.select('//*[@id="content"]').extract()[0]
    print content

Then, I get the following raw HTML code:

Welcome to Scrapy

What is Scrapy?

Scrapy…

inix · 485
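
For the question above, one option is to strip the markup after extraction. A sketch using lxml, which Scrapy already depends on; the HTML fragment here is a stand-in for the real scrapy.org content.

```python
import lxml.html

def html_to_text(raw_html):
    """Strip tags and collapse whitespace from an HTML fragment."""
    text = lxml.html.fromstring(raw_html).text_content()
    return " ".join(text.split())

print(html_to_text("<div id='content'><h1>Welcome to Scrapy</h1>"
                   "<p>What is <b>Scrapy</b>?</p></div>"))
# -> Welcome to Scrapy What is Scrapy?
```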

18 votes · 2 answers

Scrapy CrawlSpider doesn't crawl the first landing page

I am new to Scrapy, working on a scraping exercise, and using the CrawlSpider. Although the Scrapy framework works beautifully and follows the relevant links, I can't seem to make the CrawlSpider scrape the very first link (the…
gpanterov · 1,365
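
A frequent explanation for this behaviour is that CrawlSpider applies its rules only to pages it has downloaded and, by default, uses the start URLs purely for link extraction; overriding parse_start_url lets the landing page itself be scraped as well. A hedged sketch in which the spider name, rule pattern, and item fields are placeholders.

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):
    name = "example"                       # placeholder spider name
    start_urls = ["https://example.com/"]  # placeholder landing page

    rules = (
        Rule(LinkExtractor(allow=r"/items/"), callback="parse_item", follow=True),
    )

    def parse_start_url(self, response):
        # CrawlSpider uses the start URLs only to extract links by default;
        # overriding this hook lets the first landing page be scraped too.
        return self.parse_item(response)

    def parse_item(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}
```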

18 votes · 5 answers

How to extract URLs from an HTML page in Python

I have to write a web crawler in Python. I don't know how to parse a page and extract the URLs from the HTML. Where should I go to study how to write such a program? In other words, is there a simple Python program which can be used as a template for a…
user2189704 · 223
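
A small sketch of one common answer to the question above, using the third-party requests and beautifulsoup4 packages; example.com is a placeholder page.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_urls(page_url):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Resolve relative hrefs against the page URL and drop duplicates.
    return sorted({urljoin(page_url, a["href"])
                   for a in soup.find_all("a", href=True)})

for url in extract_urls("https://example.com/"):
    print(url)
```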

17 votes · 5 answers

Creating a generic scrapy spider

My question is really how to do the same thing as a previous question, "Using one Scrapy spider for several websites", but in Scrapy 0.14. Basically, I have a GUI that takes parameters like domain, keywords, tag names, etc., and I want to create a generic…
user1284717 · 171
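
One way to get such a generic spider in current Scrapy versions is to accept the per-site parameters as constructor arguments, which the command line passes with -a. A sketch; the domain and keyword handling here is invented purely for illustration.

```python
import scrapy

class GenericSpider(scrapy.Spider):
    name = "generic"

    def __init__(self, domain=None, keywords="", *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Everything site-specific arrives as an argument instead of being
        # hard-coded, so one spider class can serve many configurations.
        self.allowed_domains = [domain] if domain else []
        self.start_urls = [f"https://{domain}/"] if domain else []
        self.keywords = [k for k in keywords.split(",") if k]

    def parse(self, response):
        for kw in self.keywords:
            if kw in response.text:
                yield {"url": response.url, "keyword": kw}

# Run with, e.g.:
#   scrapy crawl generic -a domain=example.com -a keywords=price,review
```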

17 votes · 3 answers

Submit data via web form and extract the results

My Python level is novice. I have never written a web scraper or crawler. I have written Python code to connect to an API and extract the data that I want. But for some of the extracted data I want to get the gender of the author. I found this web…
add-semi-colons · 18,094
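
If the form in question performs a plain HTTP POST, the usual pattern is to replay that POST and parse the response. A sketch with the requests and beautifulsoup4 packages; the endpoint URL, field name, and result selector are entirely hypothetical and would have to be read off the real form.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical endpoint and field name: inspect the real form's
# <form action="..."> and <input name="..."> attributes to find them.
resp = requests.post(
    "https://example.com/gender-lookup",   # placeholder form action
    data={"author_name": "Alex"},          # placeholder input field
    timeout=10,
)
soup = BeautifulSoup(resp.text, "html.parser")
result = soup.select_one("#result")        # placeholder result element
print(result.get_text(strip=True) if result else "no result found")
```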

17 votes · 3 answers

Should I create pipeline to save files with scrapy?

I need to save a file (.pdf) but I'm unsure how to do it. I need to save .pdfs and store them in such a way that they are organized in directories much like they are stored on the site I'm scraping them from. From what I can gather I need to make…
John Lotacs · 1,184
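
Scrapy does ship a pipeline for exactly this: FilesPipeline downloads every URL listed in an item's file_urls field into FILES_STORE. A sketch of subclassing it so the saved files mirror the source site's directory layout; the project and module names in the comments are placeholders.

```python
# pipelines.py
from urllib.parse import urlparse
from scrapy.pipelines.files import FilesPipeline

class MirroredPdfPipeline(FilesPipeline):
    """Save each downloaded file under FILES_STORE using the same path it
    had on the source site, e.g. docs/2023/report.pdf."""

    def file_path(self, request, response=None, info=None, *, item=None):
        return urlparse(request.url).path.lstrip("/")

# settings.py (sketch):
#   ITEM_PIPELINES = {"myproject.pipelines.MirroredPdfPipeline": 1}
#   FILES_STORE = "downloads"
#
# The spider then yields items whose "file_urls" field lists the PDF links;
# the pipeline downloads them and records the results under "files".
```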

17 votes · 4 answers

Ruby on Rails, How to determine if a request was made by a robot or search engine spider?

I have a Rails app that records an IP address from every request to a specific URL, but in my IP database I've found Facebook block IPs like 66.220.15.* and Google IPs (I suspect these come from bots). Is there any formula to determine whether an IP from a request was…
Agung Prasetyo · 4,353

17 votes · 8 answers

How to Stop the page loading in firefox programmatically?

I am running several tests with WebDriver and Firefox. I'm running into a problem with the following command: WebDriver.get(www.google.com); With this command, WebDriver blocks until the onload event is fired. While this normally takes seconds,…
ArisRe82 · 535
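
The question above concerns the Java bindings, but the usual suggestions look the same in any binding: cap the page-load wait and then ask the browser to stop whatever is still loading. A sketch in Python for illustration; the 5-second limit is arbitrary.

```python
from selenium import webdriver
from selenium.common.exceptions import TimeoutException

driver = webdriver.Firefox()
driver.set_page_load_timeout(5)        # give the onload event at most 5 seconds

try:
    driver.get("https://www.google.com/")
except TimeoutException:
    # Ask the browser to stop loading whatever is still pending.
    driver.execute_script("window.stop();")

print(driver.title)
driver.quit()
```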

17 votes · 2 answers

How to make a polygon radar (spider) chart in python

    import matplotlib.pyplot as plt
    import numpy as np

    labels = ['Siege', 'Initiation', 'Crowd_control', 'Wave_clear', 'Objective_damage']
    markers = [0, 1, 2, 3, 4, 5]
    str_markers = ["0", "1", "2", "3", "4", "5"]

    def make_radar_chart(name, stats,…
David Ko · 304
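
A minimal sketch of the basic radar plot with matplotlib's polar axes, reusing the labels from the question with made-up stats. Truly polygonal grid lines (straight sides instead of circles) need the custom RadarAxes projection from the matplotlib gallery, which is omitted here.

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["Siege", "Initiation", "Crowd_control", "Wave_clear", "Objective_damage"]
stats = [3, 5, 2, 4, 1]               # placeholder scores for one character

# One angle per axis, then repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values = stats + stats[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
plt.show()
```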