I'm trying to scrape a website encoded in UTF-8 using the httr package, but apparently httr's content() function only lets you specify the encoding when you parse the page as text. Unfortunately, I can't parse it as text, because I want to run XPath queries on the result afterwards. Here's an example:
library(XML)
library(httr)
page <- GET("http://ec.europa.eu/archives/commission_2004-2009/index_en.htm")
test <- content(page, as = "parsed")
# Get a list of names from the alt attributes; many contain non-standard
# characters, and with as = "parsed" they are not properly encoded
xpathSApply(test, "//img", xmlGetAttr, "alt")
# This gives the correct encoding, but outputs a character vector,
# on which I cannot use XPath queries
test <- content(page, as = "text", encoding = "UTF-8")
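To illustrate what I mean by not being able to query the text version (just a sketch; the commented-out line is what I'd like to be able to run):

class(test)
# "character" -- a plain string, not a parsed document, so the XPath
# helpers in XML have nothing to traverse:
# xpathSApply(test, "//img", xmlGetAttr, "alt")  # fails with an error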
Update:
# htmlParse returns a parsed document, but the non-standard characters are
# not properly encoded, i.e. the result is the same whether or not I specify the
# "encoding" argument
test <- htmlParse(page, encoding = "UTF-8")
# Non-standard characters in names still not properly encoded
xpathSApply(test, "//img", xmlGetAttr, "alt")
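For completeness, here's the kind of workaround I've been considering. It's only a sketch, resting on my assumption that htmlParse() will accept a literal HTML string when asText = TRUE and that the encoding argument is honoured in that case:

# Sketch of a possible workaround (assumption: htmlParse() parses a raw
# HTML string when asText = TRUE): fetch the body as text with the
# encoding fixed, then hand that string to the parser
txt <- content(page, as = "text", encoding = "UTF-8")
test <- htmlParse(txt, asText = TRUE, encoding = "UTF-8")
xpathSApply(test, "//img", xmlGetAttr, "alt")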