
I'd like to scrape (using rvest) a website that asks users to consent to setting cookies. If I just scrape the page, rvest only downloads the popup. Here is the code:

library(rvest)
content <- read_html("https://karriere.nrw/stellenausschreibung/dba41541-8ed9-4449-8f79-da3cda0cc07c") 
content %>% html_text()

The result seems to be the content of the popup window asking for consent.

Is there a way to ignore or accept the popup or to set a cookie in advance so I can access the main text of the site?

Dominik Vogel

2 Answers


As suggested, the website is dynamic, which means its content is constructed by JavaScript. It is usually very time-consuming (or outright impossible) to reconstruct from the .js files how this is done, but in this case you can actually see in the "network analysis" tab of your browser that there is a non-hidden API serving the information you want: the requests go to api.karriere.nrw.

Hence you can take the uuid (the identifier in their database) from your URL and make a simple GET request to the API, going straight to the source without rendering the page through RSelenium, which costs extra time and resources.

Be friendly though, and send along some way to contact you, so they can tell you to stop.

library(tidyverse)
library(httr)
library(rvest)
library(jsonlite)
### identify yourself so the site operators can contact you
headers <- c("Email" = "johndoe@company.com")

### assuming the url is given and always has the same format
url <- "https://karriere.nrw/stellenausschreibung/dba41541-8ed9-4449-8f79-da3cda0cc07c"

### extract identifier of job posting
uuid <- str_split(url,"/")[[1]][5]

### build the API endpoint URL
api_url <- str_c("https://api.karriere.nrw/v1.0/stellenausschreibungen/",uuid)

### get results and parse the JSON response
response <- httr::GET(api_url,
                      httr::add_headers(.headers = headers))
result <- httr::content(response, as = "text") %>% jsonlite::fromJSON()
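
The exact structure of the returned JSON isn't documented here, so it is easiest to inspect the parsed object first and then pull out whichever fields you need (a small sketch; no particular field names are assumed):

### check the request succeeded and inspect the parsed response
httr::status_code(response)   # should be 200
names(result)                 # top-level fields the API returns
str(result, max.level = 2)    # overview of the nested structure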
Datapumpernickel

That website isn't static, so I don't think there's a way to scrape it using rvest (I would love to be proved wrong though!); an alternative is to use RSelenium to 'click' the popup then scrape the rendered content, e.g.

library(tidyverse)
library(rvest)
#install.packages("RSelenium")
library(RSelenium)

# start a Selenium-controlled Firefox session
driver <- rsDriver(browser = c("firefox"))
remote_driver <- driver[["client"]]

# load the page and close the consent popup
remote_driver$navigate("https://karriere.nrw/stellenausschreibung/dba41541-8ed9-4449-8f79-da3cda0cc07c")
webElem <- remote_driver$findElement(using = "id", value = "popup_close")
webElem$clickElement()

# scrape the rendered job description
out <- remote_driver$findElement(using = "class", value = "css-1nedt8z")
scraped <- out$getElementText()
scraped
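
When you are finished, it is good practice to close the browser and stop the Selenium server so no orphaned processes are left running (a small addition using the standard rsDriver return values):

# tidy up: close the browser session and stop the Selenium server
remote_driver$close()
driver$server$stop()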

Edit: Supporting info concerning the "non-static hypothesis":

If you check how the site is rendered in the browser, you will see that loading the "base document" alone is not sufficient; the supporting JavaScript is also required. (Source: Chrome)

[Screenshot: Chrome network analysis showing that rendering the page requires supporting JavaScript requests beyond the base document]
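
As a quick check from R (a minimal sketch that reuses the css-1nedt8z class from the code above): the node holding the job description is simply absent from the static HTML that rvest downloads, which is consistent with the content being rendered client-side.

library(rvest)

# the static document only contains the consent/JS shell;
# the node targeted above (class css-1nedt8z) is not in it yet
static <- read_html("https://karriere.nrw/stellenausschreibung/dba41541-8ed9-4449-8f79-da3cda0cc07c")
static %>% html_elements(".css-1nedt8z") %>% length()
# expected: 0, because that node is only created by JavaScript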

Tonio Liebrand
jared_mamrot
  • I made an edit supporting your "non-static" hypothesis, if you don't mind. It's not worth a new answer, I guess. – Tonio Liebrand Apr 09 '21 at 08:29
  • Ah, and one could add that the site doesn't actually set any cookies for that page. – Tonio Liebrand Apr 09 '21 at 08:59
  • Thank you @TonioLiebrand!! That info is absolutely worth including. I don't mind at all; I appreciate it. – jared_mamrot Apr 10 '21 at 08:30
  • Thanks a lot to both of you! Although there seems to be a way to use an API (answer by @Datapumpernickel), your answer helped me to understand how I can solve such a problem in the future. – Dominik Vogel Apr 15 '21 at 08:00