I would like to transform the following data frame into a wide-format hourly time series, padded with zero where there is no value. Essentially I want to turn a data frame with start/stop periods into an hourly time series:
structure(list(AffectedAssetMask = structure(c(1L, 1L, 2L, 3L,
3L, 3L, 4L, 5L, 5L, 5L, 5L, 5L, 5L), .Label = c("A_1", "A_2",
"A_3", "A_4", "A_5"), class = "factor"), EventStart = structure(c(1525737600,
1525723200, 1525982400, 1525694400, 1525651200, 1525636800, 1525723200,
1525662000, 1525658400, 1525654800, 1525651200, 1525662000, 1525662000
), class = c("POSIXct", "POSIXt"), tzone = "Europe/Rome"), EventStop = structure(c(1525748399,
1525737599, 1526155199, 1525723199, 1525694399, 1525651199, 1525809599,
1525665599, 1525661999, 1525658399, 1525654799, 1525665599, 1525665599
), class = c("POSIXct", "POSIXt"), tzone = "Europe/Rome"), UnavailableCapacity = c(150.95,
154.742, 115, 172.342, 198.163, 172.342, 115, 197, 281, 330,
340, 197, 197)), row.names = c(NA, -13L), class = "data.frame")
output:
AffectedAssetMask EventStart EventStop UnavailableCapacity
A_1 2018-05-08 02:00:00 2018-05-08 04:59:59 150.950
A_1 2018-05-07 22:00:00 2018-05-08 01:59:59 154.742
A_2 2018-05-10 22:00:00 2018-05-12 21:59:59 115.000
A_3 2018-05-07 14:00:00 2018-05-07 21:59:59 172.342
A_3 2018-05-07 02:00:00 2018-05-07 13:59:59 198.163
A_3 2018-05-06 22:00:00 2018-05-07 01:59:59 172.342
A_4 2018-05-07 22:00:00 2018-05-08 21:59:59 115.000
A_5 2018-05-07 05:00:00 2018-05-07 05:59:59 197.000
A_5 2018-05-07 04:00:00 2018-05-07 04:59:59 281.000
A_5 2018-05-07 03:00:00 2018-05-07 03:59:59 330.000
A_5 2018-05-07 02:00:00 2018-05-07 02:59:59 340.000
A_5 2018-05-07 05:00:00 2018-05-07 05:59:59 197.000
A_5 2018-05-07 05:00:00 2018-05-07 05:59:59 197.000
The last two entries are not my error; sometimes the data includes duplicate rows with the same value, but I think I should be able to filter them out with unique().
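For the duplicates I assume a plain deduplication of the whole data frame would do, something like:
df1 <- unique(df1)   # drop rows that are identical in asset, period and value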
More info on the data:
> str(df1)
'data.frame': 13 obs. of 4 variables:
$ AffectedAssetMask : Factor w/ 5 levels "A_1","A_2","A_3",..: 1 1 2 3 3 3 4 5 5 5 ...
$ EventStart : POSIXct, format: "2018-05-08 02:00:00" "2018-05-07 22:00:00" "2018-05-10 22:00:00" "2018-05-07 14:00:00" ...
$ EventStop : POSIXct, format: "2018-05-08 04:59:59" "2018-05-08 01:59:59" "2018-05-12 21:59:59" "2018-05-07 21:59:59" ...
$ UnavailableCapacity: num 151 155 115 172 198 ...
I need the data in wide format, but instead of EventStart/EventStop rows I need an hourly index: each hour within an event gets that event's value, and hours not covered by any event get zero.
To transform it into wide format:
library(reshape2)
# cast the long-format data into wide format
dcast(df1, EventStart + EventStop ~ AffectedAssetMask,
      value.var = "UnavailableCapacity", fun.aggregate = mean)
The format is fine, but instead of EventStart/EventStop I need a continuous hourly time series with zero as the fill value.
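I also noticed that dcast() has a fill argument, so I assume the zero padding itself could come from fill = 0 once every hour is its own row (df_hourly below is just a placeholder for the hourly long-format table I don't know how to build):
# hypothetical df_hourly: one row per hour per event
dcast(df_hourly, Hour ~ AffectedAssetMask,
      value.var = "UnavailableCapacity", fun.aggregate = mean, fill = 0)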
I'm able to create an empty dataframe in my desired format:
startPeriod <- as.POSIXct(strptime("01/05/2018 00:00:00", "%d/%m/%Y %H:%M:%S"))
endPeriod <- as.POSIXct(strptime("30/05/2018 00:00:00", "%d/%m/%Y %H:%M:%S"))
# create monthly hourly sequence
dataseq <- seq(startPeriod, endPeriod, by = 3600)
assets <- unique(df1$AffectedAssetMask)
# one all-zero column per asset, one row per hour
OverallDataFrame <- as.data.frame(matrix(0, length(dataseq), length(assets)))
names(OverallDataFrame) <- assets
rownames(OverallDataFrame) <- dataseq
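The only way I can fill it myself is a loop over the events (at least vectorized over the hours), but I assume this is the kind of loop that becomes too slow on the full file:
# for each event, write its value into every hour the event covers
for (i in seq_len(nrow(df1))) {
  covered <- dataseq >= df1$EventStart[i] & dataseq <= df1$EventStop[i]
  OverallDataFrame[covered, as.character(df1$AffectedAssetMask[i])] <- df1$UnavailableCapacity[i]
}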
The target output table should be something like this:
A_1 A_2 A_3 A_4 A_5
2018-05-06 18:00:00 0 0 0 0 0
2018-05-06 19:00:00 0 0 0 0 0
2018-05-06 20:00:00 0 0 0 0 0
2018-05-06 21:00:00 0 0 0 0 0
2018-05-06 22:00:00 0 0 172.342 0 0
2018-05-06 23:00:00 0 0 172.342 0 0
2018-05-07 00:00:00 0 0 172.342 0 0
2018-05-07 01:00:00 0 0 172.342 0 0
2018-05-07 02:00:00 0 0 198.163 0 340
2018-05-07 03:00:00 0 0 198.163 0 330
2018-05-07 04:00:00 0 0 198.163 0 281
2018-05-07 05:00:00 0 0 198.163 0 197
The CSV file the data comes from is around 8-9 MB, and a for loop over every hour takes too long. Some people suggest fuzzyjoin and fuzzy_left_join() with the match_fun parameter, others a function that checks whether the index falls inside the interval; unfortunately I'm a bit lost.
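If I understood the fuzzyjoin suggestion correctly, it would look roughly like the sketch below: join every hour in dataseq to the events whose [EventStart, EventStop] interval contains it, then cast to wide with zero fill. I'm not sure this is the right way, though:
library(fuzzyjoin)
library(reshape2)
hours <- data.frame(Hour = dataseq)
# keep, for every hour, the event rows whose interval contains it
joined <- fuzzy_left_join(hours, unique(df1),
                          by = c("Hour" = "EventStart", "Hour" = "EventStop"),
                          match_fun = list(`>=`, `<=`))
# cast to wide; fill = 0 pads hour/asset combinations with no event
wide <- dcast(joined, Hour ~ AffectedAssetMask,
              value.var = "UnavailableCapacity",
              fun.aggregate = mean, fill = 0)
wide[["NA"]] <- NULL  # drop the column produced by hours with no event at all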
Thank you very much!