You will have to set cookies to make a successful request.
One should check whether the site (sahibinden) allows scraping.
robotstxt::paths_allowed(paths = "https://www.sahibinden.com/satilik", warn = FALSE)
--> robotstxt does not seem to forbid it
- if you reload the site after deleting the cookies in the browser, it no longer allows access and reports unusual behaviour --> an indication of countermeasures against scraping
- to be sure, one should read the terms of usage.
Therefore, I share the "theoretical" code, but not the required cookie data, which is user-dependent anyway.
The full code would read:
library(xml2)
library(httr)
library(rvest)     # provides html_nodes() / html_text()
library(magrittr)
library(DT)
url <- "https://www.sahibinden.com/satilik"

# Paste your personal cookie string here (copied from the browser's request headers).
YOUR_COOKIE_DATA <- NULL

if(is.null(YOUR_COOKIE_DATA)){
  stop("You did not set your cookie data. ",
       "Also, please check whether the terms of usage allow the scraping.")
}
# Request the page with the cookie header and extract the body as text.
response <- url %>%
  GET(add_headers(.headers = c(Cookie = YOUR_COOKIE_DATA))) %>%
  content(type = "text", encoding = "UTF-8")
# Relative XPaths for the individual columns of each listing row.
xpathes <- data.frame(
  XPath0 = 'td[2]',
  XPath1 = 'td[3]/a[1]',
  XPath2 = 'td/span[1]',
  XPath3 = 'td/span[2]',
  XPath4 = 'td[4]',
  XPath5 = 'td[5]',
  XPath6 = 'td[6]',
  XPath7 = 'td[7]',
  XPath8 = 'td[8]',
  stringsAsFactors = FALSE  # keep the XPaths as character strings
)
# Grab every result row of the listing table.
nodes <- response %>%
  read_html() %>%
  html_nodes(xpath = "/html/body/div/div/form/div/div/table/tbody/tr")

# For every XPath, extract the text of the matching cell in each row (NA if the cell is missing).
output <- lapply(xpathes, function(xpath){
  lapply(nodes, function(node) html_nodes(x = node, xpath = xpath) %>%
    {ifelse(length(.), yes = html_text(.), no = NA)}) %>% unlist
})

# Display the result as an interactive table.
output %>% data.frame %>% DT::datatable()
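As a usage note: YOUR_COOKIE_DATA is meant to hold the raw Cookie header string that your own browser sends for the site, copied e.g. from the network tab of the developer tools. A minimal sketch with made-up placeholder names and values (they will not work as-is):

# Purely illustrative placeholders - replace with the Cookie header
# copied from your own browser session.
YOUR_COOKIE_DATA <- paste(
  "vid=000", "cdid=placeholder", "st=placeholder",
  sep = "; "
)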
Concerning the right to scrape the website's data, I try to follow: Should questions that violate API Terms of Service be flagged? Although in this case it is only a "potential violation".
Reading the cookies programmatically:
I am not sure it is possible to skip using the browser entirely.
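If one wants to try nevertheless, httr can at least capture the cookies that a plain first request receives via Set-Cookie headers. In this case that is probably not sufficient, since the relevant cookies appear to be set by JavaScript in the browser, so treat the following only as a sketch of the idea:

library(httr)

# First request without any cookies; the server may answer with Set-Cookie headers.
first <- GET("https://www.sahibinden.com/satilik")

# cookies() returns a data frame with the cookies stored on the handle.
ck <- cookies(first)

# Rebuild a Cookie header string from whatever was received -
# likely only a subset of what a real browser session would hold.
YOUR_COOKIE_DATA <- paste(ck$name, ck$value, sep = "=", collapse = "; ")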