Questions tagged [web-crawler]

A Web crawler (also known as a Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
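
Crawlers used for maintenance reduce to a fetch-and-verify loop. A minimal link-checker sketch in Python, assuming the requests and BeautifulSoup libraries; the start URL is a placeholder:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def check_links(page_url):
        """Fetch one page and report links that do not resolve cleanly."""
        html = requests.get(page_url, timeout=10).text
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(page_url, a["href"])  # resolve relative URLs
            try:
                status = requests.head(link, allow_redirects=True, timeout=10).status_code
            except requests.RequestException:
                status = None
            if status is None or status >= 400:
                print(f"BROKEN: {link} -> {status}")

    check_links("https://example.com/")  # placeholder URL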

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
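
A minimal sketch of that seed/frontier loop in Python, assuming requests and BeautifulSoup; a real crawler layers politeness, robots.txt handling, and prioritization policies on top:

    import requests
    from bs4 import BeautifulSoup
    from collections import deque
    from urllib.parse import urljoin

    def crawl(seeds, max_pages=50):
        frontier = deque(seeds)   # the crawl frontier, initialised with the seeds
        seen = set(seeds)         # never enqueue the same URL twice
        fetched = 0
        while frontier and fetched < max_pages:
            url = frontier.popleft()
            try:
                html = requests.get(url, timeout=10).text
            except requests.RequestException:
                continue          # a retry/backoff policy would go here
            fetched += 1
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(url, a["href"])
                if link.startswith("http") and link not in seen:
                    seen.add(link)
                    frontier.append(link)  # grow the frontier
            print("fetched", url)

    crawl(["https://example.com/"])  # placeholder seed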

The large volume of the Web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The Web's high rate of change implies that by the time the crawler revisits a page, it might already have been updated or even deleted.

The number of possible crawlable URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs (4 × 3 × 2 × 2 = 48), all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
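
A common countermeasure is URL canonicalization: sort the query string and drop parameters known not to affect content, so that the 48 variants collapse to a single crawl key. A sketch in which the set of ignorable parameter names is an assumption for illustration:

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    # Assumed (for illustration) to be presentation-only parameters.
    IGNORED_PARAMS = {"sort", "thumb_size", "format", "hide_user_content"}

    def canonicalize(url):
        """Drop presentation-only parameters and sort the rest."""
        parts = urlsplit(url)
        params = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
        return urlunsplit((parts.scheme, parts.netloc, parts.path,
                           urlencode(sorted(params)), ""))

    # Two of the 48 variants map to the same canonical key:
    print(canonicalize("http://gallery.example/view?id=7&sort=date&format=jpg"))
    print(canonicalize("http://gallery.example/view?format=png&id=7&sort=name"))
    # both print: http://gallery.example/view?id=7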

9683 questions

2 votes, 1 answer

Crawl a website that has an Ajax table using R

I'm new to R and have been trying to crawl this website: http://rera.rajasthan.gov.in/ProjectSearch I'm trying to get the list of all projects in the table including the url to the "View" button but have been failing miserably. The table appears…
asked by Megh
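
A generally applicable approach, in any language, is to bypass the rendered page and call the XHR endpoint that populates the Ajax table, found via the browser's network tab. A Python sketch; the endpoint path and payload below are hypothetical placeholders, not the site's real API:

    import requests

    # Hypothetical endpoint and payload -- inspect the network tab on
    # http://rera.rajasthan.gov.in/ProjectSearch to find the real ones.
    ENDPOINT = "http://rera.rajasthan.gov.in/ProjectSearchResult"  # assumption
    payload = {"projectName": "", "district": "", "page": 1}       # assumption

    resp = requests.post(ENDPOINT, data=payload, timeout=30)
    resp.raise_for_status()
    for row in resp.json():   # assuming the endpoint returns JSON rows
        print(row)            # each row should carry the "View" link target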

2 votes, 2 answers

Using the YouTube API in place of a YouTube crawler

I'm looking to crawl YouTube videos within a given timeframe, e.g. return a list of all (or a fraction of) the videos posted between Jan 14th and Jan 22nd. Does anyone have experience using the YouTube Data API…
asked by C. Reed
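
The Data API covers this case directly: search.list accepts publishedAfter and publishedBefore timestamps. A sketch using the google-api-python-client package, with a placeholder API key:

    from googleapiclient.discovery import build

    youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder

    response = youtube.search().list(
        part="id,snippet",
        type="video",
        publishedAfter="2019-01-14T00:00:00Z",   # RFC 3339 timestamps
        publishedBefore="2019-01-22T00:00:00Z",
        maxResults=50,
    ).execute()

    for item in response["items"]:
        print(item["id"]["videoId"], item["snippet"]["title"])

Results are paged: pass response["nextPageToken"] back as pageToken to fetch the next batch.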

2 votes, 0 answers

How to crawl a list of URLs in a Python web crawler?

I have a list of URLs like this: url = ['url_1','url_2', 'url_3'], and there are 300 elements in the list. As their HTML structure is similar, I have written a function to crawl them and extract the information that I need: def…
asked by Kelvinyu1117
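
With a list of that shape, the crawl itself can be a plain loop; a sketch in which extract_info stands in for the question's own extraction function:

    import requests
    from bs4 import BeautifulSoup

    urls = ["url_1", "url_2", "url_3"]  # the question's ~300-element list

    def extract_info(html):
        """Placeholder for the question's extraction function."""
        soup = BeautifulSoup(html, "html.parser")
        return soup.title.string if soup.title else None

    results = []
    for url in urls:
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException as exc:
            print("skipping", url, exc)
            continue
        results.append((url, extract_info(html)))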

2 votes, 3 answers

Can search engine bots crawl pages requiring login?

If a homepage on a website shows one kind of content when a user is not logged in and different content when the user logs in, would a search engine bot be able to crawl the user-specific content? If it cannot, then I can duplicate the content from…
asked by Nicolas de Fontenay

2 votes, 2 answers

How to extract content inside an HTML tag attribute with Python?

I am running a Scrapy project. I need to extract the content of a tag attribute; in this case it would be the date inside the content attribute. So far I was only able to extract content…
asked by Rodrigo
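
Scrapy selectors can address attributes directly, via ::attr(...) in CSS or @attribute in XPath. A sketch assuming the date sits in a meta tag's content attribute; the exact tag and attribute are assumptions about the page:

    import scrapy

    class DateSpider(scrapy.Spider):
        name = "date"
        start_urls = ["https://example.com/article"]  # placeholder

        def parse(self, response):
            # CSS form; the XPath equivalent is
            # response.xpath('//meta[@itemprop="datePublished"]/@content').get()
            date = response.css('meta[itemprop="datePublished"]::attr(content)').get()
            yield {"date": date}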

2 votes, 2 answers

Scrapy can't find form on page

I'm trying to write a spider that will automatically log in to this website. However, when I try using scrapy.FormRequest.from_response in the shell I get the error: No <form> element found in <200…
asked by kreesh
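
When from_response fails because the form is injected by JavaScript rather than present in the raw HTML, a common workaround is to build the POST manually with scrapy.FormRequest, copying the field names from the login request visible in the browser's network tab. A sketch with a hypothetical endpoint and field names:

    import scrapy

    class LoginSpider(scrapy.Spider):
        name = "login"

        def start_requests(self):
            # URL and field names are assumptions -- copy the real ones
            # from the login POST in the browser's network tab.
            yield scrapy.FormRequest(
                url="https://example.com/login",
                formdata={"username": "user", "password": "pass"},
                callback=self.after_login,
            )

        def after_login(self, response):
            if b"Logout" in response.body:  # crude success check (assumption)
                self.logger.info("logged in")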

2 votes, 1 answer

.htaccess redirect for the Facebook crawler

I've created an SPA with Vue.js. Everything was perfect until I wanted to share dynamic content on Facebook. After some research I found that I need another file (in my case a PHP file) in which I fill in the meta tags for the Facebook crawler. In .htaccess I'm trying…
asked by Nerxhan
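
The usual pattern is to detect Facebook's crawler by user agent and rewrite only those requests to a server-rendered file that prints the Open Graph tags, while everyone else still gets the SPA. A mod_rewrite sketch for .htaccess; the route and the share.php file name are placeholders:

    RewriteEngine On
    # Facebook's crawler identifies as facebookexternalhit or Facebot.
    RewriteCond %{HTTP_USER_AGENT} (facebookexternalhit|Facebot) [NC]
    RewriteRule ^item/([0-9]+)$ share.php?id=$1 [L,QSA]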

2 votes, 1 answer

How to handle temporary errors that are not signaled by the HTTP status code?

I am writing a crawler using Scrapy (Python) and don't know how to handle certain errors. I have got a website which sometimes returns an empty body or a normal page with an error message. Both replies come with a standard 200 HTTP status code. What…
asked by C. Yduqoli
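
One workable pattern is to detect the bad body in the callback and re-issue the request with dont_filter=True, counting attempts in request.meta. A sketch in which the error-marker string is an assumption about the site:

    import scrapy

    class SoftErrorSpider(scrapy.Spider):
        name = "soft_error"
        start_urls = ["https://example.com/data"]  # placeholder
        max_soft_retries = 3

        def parse(self, response):
            retries = response.meta.get("soft_retries", 0)
            # "temporarily unavailable" stands in for the site's real error text.
            if not response.body or b"temporarily unavailable" in response.body:
                if retries < self.max_soft_retries:
                    yield response.request.replace(
                        dont_filter=True,
                        meta={**response.request.meta, "soft_retries": retries + 1},
                    )
                return
            yield {"url": response.url}  # normal extraction goes here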

2 votes, 1 answer

Puppeteer not changing my IP

Basically, any proxy server (for example from this website: https://www.socks-proxy.net/) will not change my IP: const puppeteer = require('puppeteer'); (async () => { const browser = await puppeteer.launch({ args:…
asked by Michal
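
Before digging into Puppeteer flags, it is worth confirming that the proxy itself changes the exit IP at all. A quick check in Python with requests (SOCKS support needs the requests[socks] extra; the proxy address is a placeholder):

    import requests

    PROXY = "socks5://127.0.0.1:9050"  # placeholder proxy address
    proxies = {"http": PROXY, "https": PROXY}

    print("direct: ", requests.get("https://httpbin.org/ip", timeout=10).json())
    print("proxied:", requests.get("https://httpbin.org/ip",
                                   proxies=proxies, timeout=10).json())
    # If both lines show the same IP, the proxy is dead or ignoring you,
    # and no Puppeteer launch flag will change that.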

2 votes, 2 answers

StormCrawler slow with high latency crawling 300 domains

I have been struggling with this issue for about 3 months. The crawler seems to fetch pages every 10 minutes but appears to do nothing in between, with very slow overall throughput. I am crawling 300 domains in parallel, which should make…
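
Symptoms like this usually point at politeness and fetcher-thread settings. The keys below appear in StormCrawler's crawler-conf.yaml, but the values here are only illustrative assumptions to tune against, not recommended numbers:

    # Illustrative values only -- tune for 300 parallel domains.
    fetcher.threads.number: 100      # total fetcher threads in the topology
    fetcher.threads.per.queue: 1     # threads per host queue (politeness)
    fetcher.server.delay: 1.0        # seconds between requests to one host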

2 votes, 1 answer

Grabbing the content inside HTML using Python

The Chinese website here mainly describes the information of one company. Since there are many pages containing similar content, I decided to learn web crawling in Python. Basic code: import requests from bs4 import BeautifulSoup page =…
asked by Han Zhengzu
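
With Chinese pages the usual stumbling block is encoding: with requests, setting response.encoding from apparent_encoding before parsing is a common fix for mojibake on GBK-encoded sites. A sketch with a placeholder URL and selector:

    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/company-page"  # placeholder for the Chinese site
    resp = requests.get(url, timeout=10)
    resp.encoding = resp.apparent_encoding    # guess the real charset (e.g. GBK)
    soup = BeautifulSoup(resp.text, "html.parser")
    # The selector is an assumption; adjust to the page's real structure.
    for cell in soup.select("table td"):
        print(cell.get_text(strip=True))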

2 votes, 0 answers

Nested rules for CrawlSpider in Scrapy

Totally new to Scrapy and CrawlSpider. I'm stuck on how to define rules for nested crawling. I have a rule defined as Rule(LinkExtractor( allow=(), restrict_xpaths='//div[@class="sch-main-menu-sub-links-left"]' ),…
asked by F. Shahid
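
CrawlSpider rules are not literally nested: the levels are flattened into separate Rule objects, where follow=True walks the intermediate pages and a callback fires on the final ones. A sketch reusing the question's XPath for level one; the level-two XPath is an assumption:

    import scrapy
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class MenuSpider(CrawlSpider):
        name = "menu"
        start_urls = ["https://example.com/"]  # placeholder

        rules = (
            # Level 1: follow menu links (XPath from the question).
            Rule(LinkExtractor(restrict_xpaths='//div[@class="sch-main-menu-sub-links-left"]'),
                 follow=True),
            # Level 2: parse pages reached from there (XPath is an assumption).
            Rule(LinkExtractor(restrict_xpaths='//div[@class="listing"]'),
                 callback="parse_item"),
        )

        def parse_item(self, response):
            yield {"url": response.url}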

2 votes, 1 answer

Data scraping: extracting links from a table using rvest

I am trying to extract all the player links from this table: https://www.footballdb.com/players/players.html?letter=A Here is what my code looks like: library(rvest) url <- "https://www.footballdb.com/players/players.html?letter=A" webpage <-…
asked by ZBauc
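
The trick, in rvest as elsewhere, is to select the anchor nodes and read their href attribute (html_attr("href")) instead of the cell text. The same idea sketched in Python; "table a[href]" is an assumption about the page structure:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    url = "https://www.footballdb.com/players/players.html?letter=A"
    html = requests.get(url, timeout=10,
                        headers={"User-Agent": "Mozilla/5.0"}).text  # some sites block bare clients
    soup = BeautifulSoup(html, "html.parser")
    links = [urljoin(url, a["href"]) for a in soup.select("table a[href]")]
    print(links[:10])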

2 votes, 1 answer

Reading HTML from a local HTML file with HtmlUnit

I am trying to load a local HTML file which I have downloaded. Does anybody know how to do this? I am currently getting statuscode[404]. This is how I am doing it: HtmlPage…

2 votes, 1 answer

Node.js web crawling with node-crawler or simplecrawler

I am new to web crawling and I need some pointers about these two Node.js crawlers. Aim: my aim is to crawl a website and obtain ONLY the internal (local) URLs within that domain. I am not interested in any page data or scraping, just the URLs. My…
asked by Machiavelli
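
Whichever crawler library is used, keeping only internal URLs reduces to comparing each resolved link's host against the seed's. A library-agnostic Python sketch:

    from urllib.parse import urljoin, urlparse

    def internal_links(page_url, hrefs):
        """Resolve hrefs against page_url and keep only same-host links."""
        host = urlparse(page_url).netloc
        out = set()
        for href in hrefs:
            absolute = urljoin(page_url, href)
            if urlparse(absolute).netloc == host:
                out.add(absolute.split("#")[0])  # drop fragments
        return out

    print(internal_links("https://example.com/a",
                         ["/b", "https://example.com/c", "https://other.site/d"]))
    # -> {'https://example.com/b', 'https://example.com/c'}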