
Not a duplicate of: Binning Dates in R or Binning time data in R

Context

I am using getMultipleTicks in Rblpapi to pull tick data for a stock (TSLA in this example) over a month:

rawData <- getMultipleTicks("tsla us equity",
                            eventType = "TRADE",
                            startTime = as.POSIXlt("2017-03-10 13:30:00"),
                            endTime   = as.POSIXlt("2017-04-10 20:00:00"),
                            tz        = "America/New_York")

> str(rawData)
'data.frame':   1130690 obs. of  3 variables:
 $ times: POSIXct, format: "2017-03-10 08:30:07" ...
 $ value: num  246 246 246 246 246 ...
 $ size : num  58 42 80 5 9 1 4 73 100 941 ...

Objective

This data needs to be converted from this:

Raw Data:

> head(rawData, 5)
  times                value   size
1 2017-04-10 09:30:00  309     1
2 2017-04-10 09:30:00  309     1
3 2017-04-10 09:30:02  309     1
4 2017-04-10 09:30:02  308     1
5 2017-04-10 09:30:04  309.38  1

To this:

Clean Data:

> head(cleanData, 5)
  times                value   size
1 2017-04-10 09:30:00  309     2
2 2017-04-10 09:30:01          0
3 2017-04-10 09:30:02  308.5   2
4 2017-04-10 09:30:03          0
5 2017-04-10 09:30:04  309.38  1

  1. Missing times (in seconds) are filled in.
  2. Prices (value) are aggregated as the VWAP for each second (see the worked example below).
  3. Volumes (size) are summed for each second.

Compute time is not a concern.
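
For concreteness, the 09:30:02 row in the clean data is the volume-weighted average of the two raw ticks at that second: (309*1 + 308*1) / (1 + 1) = 308.5, with size 1 + 1 = 2. Below is a minimal sketch of that arithmetic on a toy stand-in for the five raw rows shown above (values copied from the head() output; timestamps assumed to be in America/New_York):

# toy stand-in for the five raw rows shown above (not the full Bloomberg pull)
toyData <- data.frame(
  times = as.POSIXct(c("2017-04-10 09:30:00", "2017-04-10 09:30:00",
                       "2017-04-10 09:30:02", "2017-04-10 09:30:02",
                       "2017-04-10 09:30:04"), tz = "America/New_York"),
  value = c(309, 309, 309, 308, 309.38),
  size  = c(1, 1, 1, 1, 1)
)

# VWAP and total size for the 09:30:02 bucket
sub <- toyData[format(toyData$times, "%H:%M:%S") == "09:30:02", ]
sum(sub$value * sub$size) / sum(sub$size)  # 308.5
sum(sub$size)                              # 2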

Things I tried

I naively attempted to use ?cut, following Binning time data in R, but could not achieve any meaningful results.

A colleague suggested a for-loop, but I am not sure how to begin implementing one that meets the requirements above.

Robert Tan

1 Answer

Here you go. `value` is the VWAP per second.

EDIT: A much faster `data.table` solution:

library(data.table)

# complete one-second grid spanning the tick data
times_all <- data.table(times = seq(min(rawData$times),
                                    max(rawData$times),
                                    by = "sec"))
# left-join the ticks onto the grid; empty seconds become NA rows
merged <- merge(times_all, rawData, all.x = TRUE)
# per second: VWAP of the trades and total traded size
cleanData <- merged[, list(value = sum(value*size, na.rm=TRUE)/sum(size, na.rm=TRUE),
                           size  = sum(size)),
                    by = list(times)]
head(cleanData)
                 times    value size
1: 2017-03-10 08:30:07 246.4942  100
2: 2017-03-10 08:30:08      NaN   NA
3: 2017-03-10 08:30:09      NaN   NA
4: 2017-03-10 08:30:10      NaN   NA
5: 2017-03-10 08:30:11      NaN   NA
6: 2017-03-10 08:30:12      NaN   NA
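
Note that this data.table result leaves seconds with no trades as NaN/NA, while the desired Clean Data above shows a size of 0 for those seconds (as the dplyr output further down does). A small follow-up sketch, assuming the cleanData data.table from the code above, to zero out the empty seconds:

# seconds with no trades carry NA size after the merge; set them to 0
# to match the Clean Data layout requested in the question
cleanData[is.na(size), size := 0]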

Original dplyr solution:

library(dplyr)

cleanData <- rawData %>%
  # left-join the ticks onto a complete one-second sequence
  left_join(data.frame(times = seq(min(rawData$times),
                                   max(rawData$times),
                                   by = "sec")), .) %>%
  group_by(times) %>%
  # per second: VWAP of the trades and total traded size
  summarize(value = sum(value*size, na.rm=TRUE)/sum(size, na.rm=TRUE),
            size  = sum(size, na.rm=TRUE))

head(cleanData)
# A tibble: 6 × 3
                times    value  size
               <dttm>    <dbl> <dbl>
1 2017-03-10 08:30:07 246.4942   100
2 2017-03-10 08:30:08      NaN     0
3 2017-03-10 08:30:09      NaN     0
4 2017-03-10 08:30:10      NaN     0
5 2017-03-10 08:30:11      NaN     0
6 2017-03-10 08:30:12      NaN     0
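
To put rough numbers on the speed difference discussed in the comments below, one option is to wrap the two pipelines above in functions and time them with system.time(). This is only a sketch; run_dt() and run_dplyr() are hypothetical names, and the timings will depend on the size of rawData and the machine:

library(data.table)
library(dplyr)

# hypothetical wrapper around the data.table pipeline above
run_dt <- function(td) {
  times_all <- data.table(times = seq(min(td$times), max(td$times), by = "sec"))
  merged <- merge(times_all, as.data.table(td), all.x = TRUE)
  merged[, list(value = sum(value*size, na.rm=TRUE)/sum(size, na.rm=TRUE),
                size  = sum(size)),
         by = list(times)]
}

# hypothetical wrapper around the dplyr pipeline above
run_dplyr <- function(td) {
  td %>%
    left_join(data.frame(times = seq(min(td$times), max(td$times), by = "sec")), .) %>%
    group_by(times) %>%
    summarize(value = sum(value*size, na.rm=TRUE)/sum(size, na.rm=TRUE),
              size  = sum(size, na.rm=TRUE))
}

system.time(run_dt(rawData))
system.time(run_dplyr(rawData))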
Pierre Lapointe
  • Actually, that's not even necessary. I updated the code to remove the `floor_date` function. I initially thought you wanted the VWAP by minute and this remained in my answer. – Pierre Lapointe Apr 13 '17 at 16:51
  • Okay, this should speed the process up; I believe lubridate slowed things down materially – Robert Tan Apr 13 '17 at 17:17
  • ...but you said: "Compute time is not a concern" :) – Pierre Lapointe Apr 13 '17 at 17:24
  • Indeed, I should have been more specific so that's a lesson learned. Detail: a difference of a few seconds (whether it takes .1s or 5s) is not a concern. But when the compute speed delta is (like in this example) from close to a minute to <5 seconds, then it's interesting :p – Robert Tan Apr 13 '17 at 17:51
  • @RobertTan I just added a 12x faster solution with `data.table` – Pierre Lapointe Apr 13 '17 at 17:52