I want to extract the hyperlinks from this website for different searches (don't be put off that it is in Danish). The hyperlinks can be found to the right (v15, v14, v13 etc.) [example]. The site I am trying to scrape seems to load its search results through some kind of jQuery/JavaScript call. This is based on my very limited knowledge of HTML and might be wrong.
I think this is why the following code fails (I am using the rvest package):
library(rvest)

sdslink <- "http://karakterstatistik.stads.ku.dk/#searchText=&term=&block=&institute=null&faculty=&searchingCourses=true&page=1"

# Fetch the page and pull the href of every link inside the results list
s_link <- sdslink %>%
  read_html(encoding = "UTF-8") %>%
  html_nodes("#searchResults a") %>%
  html_attr("href")
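If I understand correctly, everything after the # in that URL is a fragment, which the browser never sends to the server, so read_html presumably only receives the empty search shell and html_attr() comes back empty.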
I have found a method that works, but it requires me to download each page manually via right click > "Save as". That is infeasible, though, since I want to scrape a total of 100 pages for hyperlinks. A rough sketch of the manual approach is below.
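For reference, this is roughly what I do with the manually saved pages (the file names here are just examples):

library(rvest)

# Hypothetical file names -- I saved each results page by hand as page1.html, page2.html, ...
files <- sprintf("page%d.html", 1:5)

# Parse each saved page and collect the hrefs from the results list
links <- unlist(lapply(files, function(f) {
  read_html(f, encoding = "UTF-8") %>%
    html_nodes("#searchResults a") %>%
    html_attr("href")
}))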
I have also tried the jsonlite package combined with httr, but I cannot seem to find the right .json endpoint to request.
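This is the general shape of what I tried (the endpoint URL below is a placeholder, not the real one, since that is exactly what I have been unable to find):

library(httr)
library(jsonlite)

# NOTE: placeholder endpoint -- the actual URL the page's JavaScript requests
# should appear as an XHR entry in the browser's developer tools (Network tab)
res <- GET("http://karakterstatistik.stads.ku.dk/some/json/endpoint",
           query = list(searchText = "", page = 1))

# If the response really is JSON, parse it into an R structure
if (http_type(res) == "application/json") {
  results <- fromJSON(content(res, as = "text", encoding = "UTF-8"))
}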
I hope you might have a solution: either getting the jsonlite/httr approach to work, automating the "Save as" workaround, or a third, more clever path.