Questions tagged [web-crawler]

A Web crawler (also known as a Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or – especially in the FOAF community – Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering to keep their data up to date. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
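
As a concrete illustration of the link-checking use case, here is a minimal sketch in Python. It is illustrative only: the requests library, the starting URL, and the HEAD-request strategy are assumptions, not part of any particular crawler.

```python
import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Fetch a page and report the HTTP status of every link on it."""
    html = requests.get(page_url, timeout=10).text
    parser = LinkExtractor()
    parser.feed(html)
    for href in parser.links:
        absolute = urljoin(page_url, href)   # resolve relative links
        try:
            status = requests.head(absolute, allow_redirects=True, timeout=10).status_code
        except requests.RequestException as exc:
            status = f"error: {exc}"
        print(absolute, status)

check_links("https://example.com/")          # hypothetical starting page
```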

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
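
A minimal sketch of that seed-and-frontier loop in Python (single-threaded and breadth-first; the seed URL, the requests library, the page limit, and the regex-based link extraction are simplifying assumptions):

```python
import re
import requests
from collections import deque
from urllib.parse import urljoin, urldefrag

def crawl(seeds, max_pages=50):
    """Breadth-first crawl: visit seeds, harvest links, extend the frontier."""
    frontier = deque(seeds)   # the crawl frontier: URLs still to visit
    seen = set(seeds)         # avoid re-queueing URLs we already know
    visited = 0
    while frontier and visited < max_pages:
        url = frontier.popleft()
        visited += 1
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue          # skip unreachable pages
        # Crude href extraction for brevity; a real crawler should parse the HTML.
        for href in re.findall(r'href="([^"]+)"', html):
            link, _ = urldefrag(urljoin(url, href))   # absolute URL, no fragment
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

crawl(["https://example.com/"])   # hypothetical seed list
```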

The sheer volume of the Web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time the crawler gets back to a page, it might have already been updated or even deleted.
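
One common response to both pressures is to make the frontier a priority queue rather than a plain FIFO, so that more important or faster-changing pages are fetched first. A sketch of such a frontier, where the scoring scheme and the example scores are invented for illustration:

```python
import heapq
import itertools

class PriorityFrontier:
    """Crawl frontier that pops the highest-priority URL first."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # tie-breaker keeps pops deterministic

    def push(self, url, score):
        # heapq is a min-heap, so negate the score to pop high scores first.
        heapq.heappush(self._heap, (-score, next(self._order), url))

    def pop(self):
        _, _, url = heapq.heappop(self._heap)
        return url

frontier = PriorityFrontier()
frontier.push("https://example.com/news", score=0.9)   # fast-changing page
frontier.push("https://example.com/about", score=0.1)  # rarely changes
print(frontier.pop())   # -> https://example.com/news
```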

The number of possible crawlable URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer four options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
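
Crawlers typically mitigate this with URL canonicalization: normalizing each URL and dropping or ordering query parameters before checking it against the set of URLs already seen. A sketch along those lines, where the list of ignorable parameters is a made-up example (real lists are site- or crawler-specific):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that change presentation but not content (illustrative only).
IGNORED_PARAMS = {"sort", "thumb", "format", "hide_user_content"}

def canonicalize(url):
    """Normalize a URL so presentation-only variants collapse to one key."""
    parts = urlsplit(url)
    params = sorted((k, v) for k, v in parse_qsl(parts.query)
                    if k not in IGNORED_PARAMS)
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path or "/", urlencode(params), ""))

a = canonicalize("http://Example.com/gallery?sort=date&thumb=large&id=7")
b = canonicalize("http://example.com/gallery?id=7&sort=name")
assert a == b   # both collapse to http://example.com/gallery?id=7
```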

9683 questions

13 votes, 2 answers

How to get all links from the DOM?

According to https://github.com/GoogleChrome/puppeteer/issues/628, I should be able to get all links from <a href="xyz"> with this single line: const hrefs = await page.$$eval('a', a => a.href); But when I try a simple: console.log(hrefs) I only…
Vega (2,661)

13 votes, 1 answer

Do you know bot LTX71? What is it doing? Is it spam?

There is a bot/spider crawling my websites very fast. The user agent is 'ltx71 - (http://ltx71.com/)' and it has several IPs: 52.3.127.144 and 52.3.105.23. On the website it says just this: LTX71 We continuously scan the internet for security…
Bo Pennings (945)

13 votes, 2 answers

crawl dynamic web page using htmlunit

I am crawling data using HtmlUnit from a dynamic webpage that uses infinite scrolling to fetch data dynamically, just like Facebook's news feed. I used the following statement to simulate scrolling down…

13 votes, 6 answers

How do I let search crawlers properly index pages with infinite scroll?

I have a website on which I implement infinite scroll: when a user reaches the end of a page, an AJAX call is made and new content is attached to the bottom of the page. This, however, means that all content after the first "page break" is…
Stas Bichenko (13,013)

13 votes, 4 answers

Simple web crawler in C#

I have created a simple web crawler, but I want to add recursion so that for every page that is opened I can get the URLs on that page. I have no idea how to do that, and I also want to include threads to make it faster. Here is my…
Khaled Mohamed (217)

12 votes, 3 answers

How to get a web page's source code from Java

I just want to retrieve any web page's source code from Java. I found lots of solutions so far, but I couldn't find any code that works for all the links below:…
brtb (2,201)

12 votes, 4 answers

How to crawl a website/extract data into database with python?

I'd like to build a webapp to help other students at my university create their schedules. To do that I need to crawl the master schedules (one huge html page) as well as a link to a detailed description for each course into a database, preferably…
McEnroe (633)
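
For problems like the one above, a common pattern is an HTTP client plus an HTML parser feeding rows into SQLite. A minimal sketch, assuming a hypothetical schedule page whose course links contain "course" in the href (the URL, the selector, and the table schema are all invented; the real page will differ):

```python
import sqlite3
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

MASTER_URL = "https://university.example/master-schedule"   # hypothetical

def crawl_schedule(db_path="schedule.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS courses
                    (title TEXT, detail_url TEXT UNIQUE)""")
    html = requests.get(MASTER_URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.select("a"):
        href = link.get("href")
        if href and "course" in href:   # invented filter for illustration
            conn.execute("INSERT OR IGNORE INTO courses VALUES (?, ?)",
                         (link.get_text(strip=True), urljoin(MASTER_URL, href)))
    conn.commit()
    conn.close()

crawl_schedule()
```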

12 votes, 10 answers

Crawling The Internet

I want to crawl for specific things. Specifically events that are taking place like concerts, movies, art gallery openings, etc, etc. Anything that one might spend time going to. How do I implement a crawler? I have heard of Grub (grub.org -> Wikia)…
Toddly (879)

12 votes, 13 answers

best library to do web-scraping

I would like to get data from different webpages, such as addresses of restaurants or dates of different events for a given location, and so on. What is the best library I can use for extracting this data from a given set of sites?
gyurisc (11,234)

12 votes, 2 answers

Nutch No agents listed in 'http.agent.name'

Exception in thread "main" java.lang.IllegalArgumentException: Fetcher: No agents listed in 'http.agent.name' property. at org.apache.nutch.fetcher.Fetcher.checkConfiguration(Fetcher.java:1166) at…
LinuxBill (415)

12 votes, 12 answers

Counting li items from a html file using php

I have an HTML file that contains many "li" tags and nothing else: no head tag, no body tag. I want to count them using PHP. How can I do this? I tried this: $dom = new DOMDocument(); $dom->loadHTML($tmp_file); $count =…
amdev (6,703)

12 votes, 4 answers

Extracting Site data through Web Crawler outputs error due to mis-match of Array Index

I have been trying to extract a site's table text, along with its links, from a given table (on site1.com) into my PHP page using a web crawler. Unfortunately, due to an incorrect array index in the PHP code, it throws an error:…
harishk (418)

12 votes, 5 answers

Web crawler in ruby

What is your recommendation of writing a web crawler in Ruby? Any lib better than mechanize?
pierrotlefou (39,805)

12 votes, 1 answer

Do bots/spiders clone public git repositories?

I host a few public repositories on GitHub which occasionally receive clones according to traffic graphs. While I'd like to believe that many people are finding my code and downloading it, the nature of the code in some of them makes me suspect that…
Sean (1,346)

12 votes, 5 answers

How to specify parameters on a Request using scrapy

How do I pass parameters to a request on a URL like this: site.com/search/?action=search&description=My Search here&e_author= How do I put the arguments in the structure of a Spider Request, something like this example: req =…
Gh057 (137)
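
For questions like the last one, the usual Scrapy approach is to build the query string with urlencode and hand the finished URL to Request. A sketch that mirrors the question's parameters (the domain and spider name are assumptions):

```python
from urllib.parse import urlencode
import scrapy

class SearchSpider(scrapy.Spider):
    name = "search"   # hypothetical spider name

    def start_requests(self):
        params = {
            "action": "search",
            "description": "My Search here",   # urlencode escapes the spaces
            "e_author": "",
        }
        url = "https://site.com/search/?" + urlencode(params)
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        self.logger.info("fetched %s", response.url)
```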