Questions tagged [web-crawler]

A Web crawler (also known as Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
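
In code, this seed/frontier loop looks roughly like the following. It is a minimal sketch using only the Python standard library; the seed URL, the page limit, and the bare-bones link extraction are illustrative, and it deliberately omits robots.txt handling, politeness delays, and duplicate-content detection.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href values of all <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seeds, max_pages=50):
    frontier = deque(seeds)   # the crawl frontier, seeded with the start URLs
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue              # unreachable or non-decodable page: skip it
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)          # resolve relative links
            if absolute.startswith(("http://", "https://")) and absolute not in visited:
                frontier.append(absolute)          # grow the frontier
    return visited


# crawl(["https://example.com/"])  # hypothetical seed URL
```

A real crawler would also respect robots.txt and rate-limit requests per host; those rules are part of the "set of policies" mentioned above.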

The large volume of the Web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The high rate of change of Web content implies that by the time the crawler reaches a page, it might have already been updated or even deleted.
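
One common way to prioritize downloads is to keep the frontier in a priority queue rather than a plain FIFO queue. The sketch below assumes a caller-supplied score per URL (for example, derived from estimated change rate or page importance); the example URLs and scores are made up for illustration.

```python
import heapq
import itertools


class PriorityFrontier:
    """A crawl frontier that always yields the URL with the lowest score first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps insertion order stable

    def push(self, url, score):
        heapq.heappush(self._heap, (score, next(self._counter), url))

    def pop(self):
        score, _, url = heapq.heappop(self._heap)
        return url


frontier = PriorityFrontier()
frontier.push("https://example.com/news", score=0.1)          # changes often: fetch first
frontier.push("https://example.com/archive/2001", score=0.9)  # rarely changes: fetch later
print(frontier.pop())  # -> https://example.com/news
```

Swapping this in for the FIFO frontier above turns breadth-first crawling into best-first crawling.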

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer four options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
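
A standard mitigation is URL canonicalization: every discovered URL is rewritten into a normalized form before it enters the frontier, so that parameter permutations collapse to a single entry. The sketch below assumes the gallery's presentation parameters are known in advance; the parameter names are hypothetical.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Presentation-only parameters the crawler chooses to ignore; these names are
# hypothetical and would normally come from per-site configuration.
IGNORED_PARAMS = {"sort", "thumb", "format", "hide_user_content"}


def canonicalize(url):
    parts = urlsplit(url)
    kept = sorted(
        (key, value)
        for key, value in parse_qsl(parts.query, keep_blank_values=True)
        if key not in IGNORED_PARAMS
    )
    return urlunsplit(
        (parts.scheme, parts.netloc.lower(), parts.path, urlencode(kept), "")
    )


a = canonicalize("http://gallery.example/view?id=7&sort=date&thumb=large&format=jpg")
b = canonicalize("http://gallery.example/view?format=png&id=7&sort=name&thumb=small")
assert a == b == "http://gallery.example/view?id=7"  # presentation variants collapse
```

Sorting the remaining parameters also maps reordered query strings to the same canonical URL.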

9683 questions
2
votes
1 answer

selenium implicitly wait doesn't work

This is the first time I have used Selenium and a headless browser, as I want to crawl some web pages that use Ajax. The effect is great, but in some cases it takes too much time to load the whole page (especially when some resource is unavailable), so I have…
2
votes
2 answers

Web crawler in Rails to extract links and download files from web page

I'm using RoR. I will specify a link to a web page in my application, and here are the things that I want to do: (1) I want to extract all the links in the web page, (2) find whether they are links to a PDF file (basically a pattern match), (3) I want to…
theReverseFlick
  • 5,894
  • 8
  • 32
  • 33
2
votes
2 answers

Is there any way to get element info inside the shadow root with Selenium?

I am trying to scrape some info from a website with Java and Selenium. However, because of the shadow root I cannot reach any web element. When I try to get the HTML, it returns an empty array. Is there any way to reach the info inside the shadow root, or is it…
C.Aglar
  • 1,290
  • 2
  • 14
  • 30
2
votes
1 answer

How to increase the request page time in python 3 while scraping web pages?

I have started scraping reviews from an e-commerce platform to perform sentiment analysis and share it with people on my blog, to make their lives easier and help them understand everything about a product in just one article. I am using Python packages…
Prateek
  • 185
  • 1
  • 3
  • 12
2
votes
2 answers

ASP.NET MVC - Crawler - doesn't encode \n

The Long Description field has \n in it to give a line break. It works perfectly in default browser mode but doesn't encode for the crawler and AMP page. Tried:

monda
  • 3,809
  • 15
  • 60
  • 84
2
votes
1 answer

Scraping the source code using VBA-Macros

I need to crawl the price values from the price comparison website (product link: https://www.toppreise.ch/prod_488002.html), but I am not able to scrape them. See the highlighted price in the image that I want to capture. Please help me work out how to crawl this…
Prasath
  • 21
  • 1
  • 2
2
votes
1 answer

How to use Java to read the Unicode range from a font file

I have a TTF file which contains Unicode code points and the corresponding glyphs, as the figure shows: the red box is the Unicode code point, and the text above it is the corresponding glyph. How could I extract the Unicode values from the font file?
DuFei
  • 447
  • 6
  • 20
2
votes
2 answers

How to tell Google bot that certain links no longer exist

In the first days of a website, I made a mistake in the generation of some links; following them outputs a database error. Google bot has attempted to follow those links and now they appear as crawl errors in webmasters tools. Although I have since…
Panagiotis Panagi
  • 9,927
  • 7
  • 55
  • 103
2
votes
1 answer

Unable to fix VBA crawling error on webpage

Website: http://www.cookcountypropertyinfo.com/default.aspx. I wanted to automate the process of entering values in the 'BY PIN' section and then submitting via the 'Search' button. The code below fills in the 'BY PIN' section, but it fails the validation…
Sonu Kumar
  • 108
  • 6
2
votes
0 answers

How to crawl ASP.NET web applications

I'm writing an application which needs to communicate with an ASP.NET website that doesn't have an API. I'm using Python 3.5.2 with Requests 2.18.4 to achieve my purpose. The problem is that the site uses _dopostback(), so I achieved my goal using…
2
votes
1 answer

Issue with JWikiDocs for Wikipedia Crawling

I'm trying to use JWikiDocs as a focused crawler for downloading Wikipedia pages as text documents. I'm executing it within a VirtualBox running Ubuntu 17.10.1. I have cleaned and compiled JWikiDocs using $ make clean and $ make all. Then, as per…
Scott
  • 1,863
  • 2
  • 24
  • 43
2
votes
1 answer

Golang web spider with pagination processing

I'm working on a Golang web crawler that should parse the search results on a specific search engine. The main difficulty is parsing with concurrency, or rather, processing pagination such as ← Previous 1 2 3 4 5 ... 34 Next →. All things…
himei
  • 23
  • 3
2
votes
0 answers

scrapy always Starting new HTTP connection after crawl

After my spider has crawled all the URLs, Scrapy doesn't stop; how do I stop it after the crawl has finished? The start URL is http://http://192.168.139.28/dvwa. After my spider finished, it seems the spider is always Starting new HTTP connection (1):…
quanyechavs huo
  • 125
  • 1
  • 13
2
votes
1 answer

A crawler that builds the link tree from a single website

I want to know if there are any outsourced solutions for a crawler that will parse only the links and pages from a given website, and will output: (1) the link tree, (2) the pages (where necessary). Thanks!
dana
  • 5,168
  • 20
  • 75
  • 116
2
votes
1 answer

Crawler leaves lots of ESTABLISHED TCP sockets to some servers

I've got a Java web crawler. I've noticed that for a small number of servers I crawl I am left with a large number of ESTABLISHED sockets: joel@bohr:~/tmp/test$ lsof -p 6760 | grep TCP java 6760 joel 105u IPv6 96546 0t0 TCP…
Joel
  • 29,538
  • 35
  • 110
  • 138