
I am writing code in R to read a list of CSV files from 6000 URLs: `data <- read.csv(url)`. If R cannot access a URL, the code execution stops. Does anyone know how to avoid this error stopping execution in R?

I have been looking for an argument to the `read.csv` function, but perhaps there is another function for this.

Sergio
  • Welcome to Stack Overflow. I think you're looking for [tryCatch](https://stackoverflow.com/questions/58126097/trycatch-function-works-on-most-non-existent-urls-but-it-does-not-work-in-at-l), if that essentially describes what you're doing; it shows how to step through bad/invalid URLs. – Chris Apr 03 '22 at 00:02
  • Please provide enough code so others can better understand or reproduce the problem. – Community Apr 03 '22 at 06:28

1 Answer


Simply use `tryCatch` to catch the error inside the loop and continue on to the other iterations:

# DEFINED METHOD, ON ERROR PRINTS MESSAGE AND RETURNS NULL
read_data_from_url <- function(url) {
    tryCatch({
        read.csv(url)
    }, error = function(e) {
        print(e)
        return(NULL)
    })
}

# NAMED LIST OF DATA FRAMES
df_list <- sapply(
    list_of_6000_urls, read_data_from_url, simplify = FALSE
)

# FILTER OUT NULLS AND ZERO-ROW FRAMES (PROBLEMATIC URLS)
df_list <- Filter(NROW, df_list)
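
If the goal is a single combined table, a minimal follow-up sketch (an assumption on my part: all successfully read CSVs share the same column names) is to row-bind the filtered list:

# ROW-BIND THE SURVIVING DATA FRAMES (ASSUMES IDENTICAL COLUMNS ACROSS FILES)
combined_df <- do.call(rbind, df_list)

Since `sapply(..., simplify = FALSE)` returns a named list, each element keeps its URL as its name, so you can also check `names(df_list)` afterward to see which sources survived the filter.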
Parfait