
I create some xts objects, save them as csv files, upload them to Google Cloud Platform, ..., build a Python dictionary there, and finally download that dictionary (several of the original files at once, over a REST API) back into R with RCurl.

I want to recreate the original xts objects, but I can't properly parse the end result.

Here is the data flow:

One of the csv files looks like this (they all have the same structure):

[screenshot: csv files]

The result in R is taken with:

library(RCurl)
all_files <- getURL("https://....")
all_parsed <- jsonlite::fromJSON(all_files)

It is a list and looks like this: [screenshot: result with RCurl]

If I print one element with cat(all_parsed$`2020-09-24.csv`) (backticks are needed because the name is not syntactic), it has just the right structure for xts:

[screenshot: output of cat(all_parsed$`2020-09-24.csv`)]

But I can't really use the data as-is because of all the \n, \", etc.

I could try strsplit(...), but that is a lot of manual work.

Is there any better way to parse the result?

Sorry I can't provide a better explanation or reproducible code - I could send the URL for RCurl to some of you by mail.

Thank you!


2 Answers


Like this?

library(stringr)

# Example string
ex_str <- "\"Index\",\"E2202\"\n\"2020-09-04\",NA\n\"2020-12-02\",2.7"

# First, split on the \n which represents a new line 
ex_str <- str_split(ex_str, "\n")[[1]]

# Then, drop quotation marks
str_replace_all(ex_str, "\"", "")

[1] "Index,E2202"    "2020-09-04,NA"  "2020-12-02,2.7"
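If you would rather not split and strip by hand at all, a shorter route is to hand the string straight to read.csv() via its text argument, which handles both the newlines and the quotation marks in one step. This is only a sketch, assuming each list element really holds one complete csv file as a single string, like ex_str above:

```r
# Sketch: parse the embedded CSV string in one step with read.csv().
# Assumption: ex_str holds one complete csv file as a single string.
ex_str <- "\"Index\",\"E2202\"\n\"2020-09-04\",NA\n\"2020-12-02\",2.7"

df <- read.csv(text = ex_str, stringsAsFactors = FALSE)
# df$Index is c("2020-09-04", "2020-12-02"); df$E2202 is c(NA, 2.7)
```

The text = argument wraps the string in a text connection internally, so no temporary file is needed.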

Yes, thank you. I will complete your code with:

cleaned <- str_replace_all(ex_str, "\"", "")
eex_str <- cleaned[-length(cleaned)] # drop the last element (empty line from the trailing \n in my files)
index_string <- unlist(str_split(eex_str, ","))[c(TRUE, FALSE)]  # dates
data_string  <- unlist(str_split(eex_str, ","))[c(FALSE, TRUE)]  # values; both length 6
library(xts)
suppressWarnings(recreated_xts <- xts(x = as.numeric(data_string[-1]), order.by = as.Date(index_string[-1]))) # suppress "NAs introduced by coercion"
colnames(recreated_xts) <- data_string[1]
recreated_xts
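For comparison, the rebuild above can also be sketched without any string handling, letting read.csv() do the parsing and feeding the resulting data frame straight into xts(). Again this assumes each element of all_parsed is one csv file as a single string, like ex_str in the accepted answer:

```r
library(xts)

# Sketch under the same assumption: ex_str is one csv file as a string.
ex_str <- "\"Index\",\"E2202\"\n\"2020-09-04\",NA\n\"2020-12-02\",2.7"

df <- read.csv(text = ex_str, stringsAsFactors = FALSE)
# xts() coerces the remaining columns to a matrix; the column name E2202
# is carried over automatically, so no manual colnames() step is needed.
recreated_xts <- xts(df[, -1, drop = FALSE], order.by = as.Date(df$Index))
recreated_xts
```

Because read.csv() already returns NA and numeric values with the right types, the suppressWarnings() wrapper is no longer needed.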