Questions tagged [web-crawler]

A Web crawler (also known as a Web spider) is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or, especially in the FOAF community, Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
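
As a rough illustration of the link-checking maintenance task, here is a minimal sketch in Python using requests and BeautifulSoup; the start URL and timeout are placeholder values, not part of the tag description:

```python
# Minimal link-checker sketch for the "maintenance" use case described above.
# The start URL and timeout are placeholder values.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_broken_links(page_url, timeout=10):
    """Fetch one page and report links that fail or return an HTTP error status."""
    html = requests.get(page_url, timeout=timeout).text
    soup = BeautifulSoup(html, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        target = urljoin(page_url, anchor["href"])
        try:
            status = requests.head(target, timeout=timeout, allow_redirects=True).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            broken.append((target, status))
    return broken

if __name__ == "__main__":
    for url, status in find_broken_links("https://example.com/"):
        print(status, url)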

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
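
The seed and frontier loop just described can be sketched in a few lines of Python. This is a minimal, breadth-first illustration rather than a real crawler; the seed URL, page limit, and politeness delay are arbitrary example values:

```python
# Minimal seed/frontier sketch of the crawl loop described above.
# The seed list, page limit, and politeness delay are illustrative values only.
import time
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seeds, max_pages=50, delay=1.0):
    frontier = deque(seeds)   # URLs waiting to be visited (the crawl frontier)
    seen = set(seeds)         # URLs already queued, so they are not added twice
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        visited.append(url)
        # Identify hyperlinks in the page and add unseen ones to the frontier.
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                frontier.append(link)
        time.sleep(delay)     # crude politeness policy
    return visited

if __name__ == "__main__":
    print(crawl(["https://example.com/"]))
```

A production crawler would add the "set of policies" mentioned above, such as respecting robots.txt, per-host rate limits, and prioritization of the frontier.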

The large volume of the Web implies that a crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The Web's high rate of change implies that by the time a crawler reaches a page, it might already have been updated or even deleted.

The number of possible crawlable URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer four options to users, specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This combinatorial explosion creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
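
One common mitigation is to canonicalize URLs before adding them to the frontier, so that presentation-only parameters do not multiply into duplicate fetches. A rough sketch, assuming hypothetical gallery parameters named sort, thumb, format, and show_comments:

```python
# Sketch of URL canonicalization so presentation-only parameters do not
# produce duplicate crawls. The parameter names (sort, thumb, format,
# show_comments) are hypothetical stand-ins for the gallery example above.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

PRESENTATION_PARAMS = {"sort", "thumb", "format", "show_comments"}

def canonicalize(url):
    parts = urlsplit(url)
    # Keep only parameters that change the underlying content, in a fixed order.
    kept = sorted((k, v) for k, v in parse_qsl(parts.query)
                  if k not in PRESENTATION_PARAMS)
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path,
                       urlencode(kept), ""))

# Two of the 48 gallery variants collapse to the same canonical URL:
print(canonicalize("http://example.com/gallery?album=7&sort=date&thumb=small&format=jpg"))
print(canonicalize("http://example.com/gallery?sort=name&album=7&thumb=large&format=png"))
# Both print: http://example.com/gallery?album=7
```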

9683 questions
2 votes, 1 answer

Why am I getting this (apparently) unusual AttributeError: 'bytes' object has no attribute '_all_strings'? Is there a way to get around it?

I've been searching for a solution to this AttributeError I keep getting, and no solution I've been able to find deals with '_all_strings'. I want to code a web-crawler, but there's a lot of nonsense at the top and bottom of the page, so I'm trying…
AdeDoyle
2 votes, 1 answer

StatusUpdaterBolt: Could not find unacked tuple for ID

I have a very simple topology that spouts from an ES index (AggregationSpout), fetches the pages (FetcherBolt) and uses StatusUpdaterBolt to update the ES status to "FETCHED". However, I noticed such warnings in the log files: [WARN] Could not…
EJO
2 votes, 2 answers

Using .write() in Python only writes a single line

So, as an assignment from Thenewboston, I'm trying to grab a block of code from his site and write it to a file. The code grabbing part works just fine, but the writing part doesn't work: import requests from bs4 import BeautifulSoup def…
SirDarknight
2 votes, 2 answers

Crawl Image using Apache Nutch

I installed Apache Nutch 2.3.1, Solr 6.5.1 and MongoDB 3.4.7. After I crawl URLs that contain many images, there are no images or videos in Solr or MongoDB. I also changed the regex-urlfilter.txt file in Apache Nutch and deleted the postfixes that were…
Sajjad Rostami
2 votes, 2 answers

Clearing session in Firefox for every request made (Watir issue)

I'm developing a screen scraping robot that uses Watir (ruby) to crawl specific web searches. Watir is used as the search results are delivered in pages, only available via AJAX requests. My issue is now that to perform a new search, the browser…
Jonas Bylov
2 votes, 0 answers

Nutch indexing fails with java.lang.NoSuchFieldError: INSTANCE

I'm using Nutch 1.13 to crawl data and store it in Elasticsearch. I have also created some custom parse filter and index filter plugins. Everything was working fine until I updated Elasticsearch to version 5. Then the indexer-elastic plugin stopped…
Abhishek Ramachandran
2 votes, 3 answers

Crawl data on the app store

Does anyone know how AppShopper.com crawls the data on Apple's App Store? Do we have to simulate a browser using an automated testing tool like Watir? Is this the only way to collect the data (e.g., download statistics, prices)?
Nimit Pattanasri
2 votes, 1 answer

Scrapy to bypass an alert message with form authentication

Is it possible for Scrapy to crawl past an alert message? For example, once the link http://domainhere/admin is loaded in an actual browser, an alert message with a form appears asking for a username and password. Or is there a way to inspect the form…
BLNK
2 votes, 1 answer

WebCrawler, only a few items have discounted prices - index error

I am new to programming and am trying to build my first little web crawler in Python. Goal: crawl a product list page, scrape the brand name, article name, original price and new price, and save them to a CSV file. Status: I've managed to get the brand…
2 votes, 5 answers

Web crawler in Ruby: How to achieve the best performance?

I'm writing a web crawler that should be able to parse multiple pages at the same time. I use Nokogiri for parsing, which is quite good and solves all my tasks, but I don't know how to achieve better performance. I use threads to make many open-uri…
Arty
2 votes, 2 answers

Scrapy project error: "undefined variable" even though I have defined the variable

I'm following this tutorial https://www.practicalecommerce.com/Monitor-Competitor-Prices-with-Python-and-Scrapy exactly as it says, step by step, but when I get to the part where I run the spider with the command: scrapy crawl massEffect -o…
user7367694
2 votes, 1 answer

Get all URLs in an entire site using Scrapy

Folks! I'm trying to get all internal URLs of an entire site for SEO purposes, and I recently discovered Scrapy to help me with this task. But my code always returns an error: 2017-10-11 10:32:00 [scrapy.core.engine] INFO: Spider opened 2017-10-11 10:32:00…
Jodmoreira
2 votes, 0 answers

Laravel 5 crawler not logging in to openweathermap.org

I'm using the https://github.com/dweidner/laravel-goutte crawler and trying to log in to https://home.openweathermap.org/users/sign_in, but the only response I'm getting is: Error message: The change you wanted was rejected. Maybe you tried to change…
2 votes, 2 answers

Generating a sitemap using python

I'm trying to parse a webpage and create a sitemap using Python. I've written the piece of code below: import urllib2 from bs4 import BeautifulSoup mypage = "http://example.com/" page = urllib2.urlopen(mypage) soup =…
Firstname
2 votes, 1 answer

Reject URLs after fetching based on a condition in Nutch

I want to know whether it's possible to filter the URLs that are fetched, based on a condition (for example, published date or time). I know that we can filter the URLs with regex-urlfilter for fetching. In my case I don't want to index old…
Abhishek Ramachandran