
I have the following table:

# A tibble: 40 x 5
# Groups:   stock [1]
   stock     date         mkt_cap  week  returns
   <chr>    <date>         <dbl> <int>    <dbl>
 1 A        2019-03-04 10522834.    NA  NA     
 2 A        2019-03-05 11659707.    NA   9.70  
 3 A        2019-03-06 11464531.    NA  -2.25  
 4 A        2019-03-07 12217241.    NA   5.80  
 5 A        2019-03-08 11619351.     1  -5.57  
 6 A        2019-03-09 11578687.    NA  -0.899 
 7 A        2019-03-10 11658368.    NA   0.141 
 8 A        2019-03-11 12722921.    NA   8.20  
 9 A        2019-03-12 15429934.    NA  18.8   
10 A        2019-03-13 16801600.    NA   7.98  
11 A        2019-03-14 17898334.    NA   5.79  
12 A        2019-03-15 18492686.     2   2.74  
13 A        2019-03-16 20686683.    NA  10.7   
14 A        2019-03-17 22299970.    NA   6.98  
15 A        2019-03-18 22924182.    NA   2.24  
16 A        2019-03-19 24174351.    NA   4.79  
17 A        2019-03-20 24661467.    NA   1.48  
18 A        2019-03-21 23351810.    NA  -5.97  
19 A        2019-03-22 27826601.     3  17.0   
20 A        2019-03-23 30670482.    NA   9.22  
21 A        2019-03-24 32802772.    NA   6.21  
22 A        2019-03-25 31778387.    NA  -3.68  
23 A        2019-03-26 33237006.    NA   3.99  
24 A        2019-03-27 34971479.    NA   4.59  
25 A        2019-03-28 36774005.    NA   4.53  
26 A        2019-03-29 37594815.     4   1.71  
27 A        2019-03-30 38321816.    NA   1.42  
28 A        2019-03-31 35167070.    NA  -9.08  
29 A        2019-04-01 35625396.    NA   0.808 
30 A        2019-04-02 35764747.    NA  -0.0940
31 A        2019-04-03 28316242.    NA -23.8   
32 A        2019-04-04 26124803.    NA  -8.53  
33 A        2019-04-05 30390295.     5  14.6   
34 A        2019-04-06 28256485.    NA  -7.76  
35 A        2019-04-07 29807837.    NA   4.87  
36 A        2019-04-08 30970364.    NA   3.36  
37 A        2019-04-09 30470093.    NA  -2.10  
38 A        2019-04-10 30860276.    NA   0.806 
39 A        2019-04-11 27946472.    NA -10.4   
40 A        2019-04-12 27662766.     6  -1.48  

Over this table, I want to run a rolling regression in which each regression uses the past month of data. I want to run these regressions per week, i.e. for week == 1, week == 2, etc., each time using the preceding month of data. For example, for week == 3 (2019-03-22), the regression should use all rows with dates in the month up to and including 2019-03-22. The regression should be lm(returns ~ mkt_cap). I have tried a number of things using the slide_period() function, but they did not work out for me. For example, I tried:

tbl.data %>% group_by(stock, week) %>% slide_period(date, date, "month", ~.x, .before = 1)

There are some gaps in my data, so I would prefer a solution that works off the actual dates.

Could someone help me out? Kind regards.

Cardinal
  • You might find this question helpful: [The rolling regression in R using roll apply](https://stackoverflow.com/q/23162937/) – Ian Campbell Apr 21 '21 at 13:48
  • One of the problems with rollapply is that it doesn't consider "gaps" in the data, so that is not my preferred solution. – Cardinal Apr 21 '21 at 13:51
  • Perhaps you could update your question with additional details about your expected output and your consideration of why `rollapply` is not your preferred solution. Otherwise, someone will come by and spend their time providing that solution. – Ian Campbell Apr 21 '21 at 13:53
  • Good idea. I'll add. – Cardinal Apr 21 '21 at 13:56

3 Answers


I would use a tidyverse rowwise approach.

It is not entirely clear to me how the models should be created by week while looking back one month. In the approach below I calculate max_date per week and go back one calendar month from it.
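
As a side note: `%m-% months(1)` subtracts a calendar month rather than a fixed 30 days, and rolls back to the last existing day when the target date does not exist. A minimal illustration:

# lubridate's %m-% never produces a nonexistent date
library(lubridate)
ymd("2019-03-29") %m-% months(1)
#> [1] "2019-02-28"  # 2019-02-29 does not exist, so it rolls back
ymd("2019-04-12") %m-% months(1)
#> [1] "2019-03-12"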

# setup
library(tidyverse)
library(lubridate)

dat <- tribble(~stock, ~date, ~mkt_cap, ~week, ~returns,
"A",        "2019-03-04", 10522834.,    NA,       NA,
"A",        "2019-03-05", 11659707.,    NA,     9.70,
"A",        "2019-03-06", 11464531.,    NA,    -2.25,  
"A",        "2019-03-07", 12217241.,    NA,     5.80,  
"A",        "2019-03-08", 11619351.,     1,    -5.57,  
"A",        "2019-03-09", 11578687.,    NA,   -0.899,
"A",        "2019-03-10", 11658368.,    NA,    0.141, 
"A",        "2019-03-11", 12722921.,    NA,     8.20,  
"A",        "2019-03-12", 15429934.,    NA,     18.8,   
"A",        "2019-03-13", 16801600.,    NA,     7.98,  
"A",        "2019-03-14", 17898334.,    NA,     5.79,  
"A",        "2019-03-15", 18492686.,     2,     2.74,  
"A",        "2019-03-16", 20686683.,    NA,     10.7,   
"A",        "2019-03-17", 22299970.,    NA,     6.98,  
"A",        "2019-03-18", 22924182.,    NA,     2.24,  
"A",        "2019-03-19", 24174351.,    NA,     4.79,  
"A",        "2019-03-20", 24661467.,    NA,     1.48,  
"A",        "2019-03-21", 23351810.,    NA,    -5.97,  
"A",        "2019-03-22", 27826601.,     3,     17.0,   
"A",        "2019-03-23", 30670482.,    NA,     9.22,  
"A",        "2019-03-24", 32802772.,    NA,     6.21,  
"A",        "2019-03-25", 31778387.,    NA,    -3.68,  
"A",        "2019-03-26", 33237006.,    NA,     3.99,  
"A",        "2019-03-27", 34971479.,    NA,     4.59,  
"A",        "2019-03-28", 36774005.,    NA,     4.53,  
"A",        "2019-03-29", 37594815.,     4,     1.71,  
"A",        "2019-03-30", 38321816.,    NA,     1.42,  
"A",        "2019-03-31", 35167070.,    NA,    -9.08,  
"A",        "2019-04-01", 35625396.,    NA,    0.808, 
"A",        "2019-04-02", 35764747.,    NA,  -0.0940,
"A",        "2019-04-03", 28316242.,    NA,    -23.8,   
"A",        "2019-04-04", 26124803.,    NA,    -8.53,  
"A",        "2019-04-05", 30390295.,     5,     14.6,   
"A",        "2019-04-06", 28256485.,    NA,    -7.76,  
"A",        "2019-04-07", 29807837.,    NA,     4.87,  
"A",        "2019-04-08", 30970364.,    NA,     3.36,  
"A",        "2019-04-09", 30470093.,    NA,    -2.10,  
"A",        "2019-04-10", 30860276.,    NA,    0.806, 
"A",        "2019-04-11", 27946472.,    NA,    -10.4,   
"A",        "2019-04-12", 27662766.,     6,    -1.48) %>% 
  mutate(date = as.Date(date)) %>% 
  # fill the week label upward so every row carries the week it belongs to
  fill(week, .direction = "up") 

# summarised data.frame by week with min and max date
dat2 <- dat %>% 
  group_by(week) %>% 
  summarise(max_date = max(date),
            min_date = max_date %m-% months(1))
#> `summarise()` ungrouping output (override with `.groups` argument)

# create the models  
dat3 <- dat2 %>%
  rowwise() %>% 
  mutate(mod = list(lm(returns ~ mkt_cap,
                       # .env$ looks max_date/min_date up outside of
                       # dat's columns, i.e. in the current rowwise row
                       data = filter(dat,
                                     date <= .env$max_date,
                                     date >= .env$min_date)))) 
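
To sanity-check a single window, one model can be pulled out of the list column, e.g.:

# inspect the model fitted for the third week
summary(dat3$mod[[3]])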

# get the relevant information per week
dat3 %>%
  mutate(res = list(broom::tidy(mod)),
         broom::glance(mod)) %>% 
  select(week,
         res,
         adj.r.squared,
         mod_p.value = p.value,
         nobs) %>% 
  unnest(res) %>% 
  filter(term != "(Intercept)")

#> # A tibble: 6 x 9
#>    week term  estimate std.error statistic p.value adj.r.squared mod_p.value
#>   <dbl> <chr>    <dbl>     <dbl>     <dbl>   <dbl>         <dbl>       <dbl>
#> 1     1 mkt_~  1.01e-5   1.34e-5     0.756   0.529       -0.167        0.529
#> 2     2 mkt_~  9.26e-7   7.45e-7     1.24    0.245        0.0520       0.245
#> 3     3 mkt_~  2.56e-7   2.97e-7     0.864   0.400       -0.0152       0.400
#> 4     4 mkt_~  2.00e-8   1.42e-7     0.141   0.889       -0.0426       0.889
#> 5     5 mkt_~ -1.18e-7   1.61e-7    -0.736   0.467       -0.0150       0.467
#> 6     6 mkt_~ -3.23e-7   2.37e-7    -1.37    0.182        0.0271       0.182
#> # ... with 1 more variable: nobs <int>

Created on 2021-04-27 by the reprex package (v0.3.0)

Update

This approach can easily be extended to more than one stock:

# let's append the same data, with stock changed to "B"
# (`.` inside bind_rows() still refers to the incoming dat):
dat <- dat %>% 
  bind_rows({mutate(., stock = "B")})

# summarised data.frame by week and group with min and max date
dat2 <- dat %>% 
  group_by(stock, week) %>% 
  summarise(max_date = max(date),
            min_date = max_date %m-% months(1))
#> `summarise()` has grouped output by 'stock'. You can override using the `.groups` argument.

# create the models, this time also filtering on .env$stock
dat3 <- dat2 %>%
  rowwise() %>% 
  mutate(mod = list(lm(returns ~ mkt_cap,
                       data = filter(dat,
                                     stock == .env$stock,
                                     date <= .env$max_date,
                                     date >= .env$min_date)))) 

# get the relevant information per week (this stays the same!)
dat3 %>%
  mutate(res = list(broom::tidy(mod)),
         broom::glance(mod)) %>% 
  select(week,
         res,
         adj.r.squared,
         mod_p.value = p.value,
         nobs) %>% 
  unnest(res) %>% 
  filter(term != "(Intercept)")

#> Adding missing grouping variables: `stock`
#> # A tibble: 12 x 10
#> # Groups:   stock [2]
#>    stock  week term         estimate   std.error statistic p.value adj.r.squared
#>    <chr> <dbl> <chr>           <dbl>       <dbl>     <dbl>   <dbl>         <dbl>
#>  1 A         1 mkt_cap  0.0000101    0.0000134       0.756   0.529       -0.167 
#>  2 A         2 mkt_cap  0.000000926  0.000000745     1.24    0.245        0.0520
#>  3 A         3 mkt_cap  0.000000256  0.000000297     0.864   0.400       -0.0152
#>  4 A         4 mkt_cap  0.0000000200 0.000000142     0.141   0.889       -0.0426
#>  5 A         5 mkt_cap -0.000000118  0.000000161    -0.736   0.467       -0.0150
#>  6 A         6 mkt_cap -0.000000323  0.000000237    -1.37    0.182        0.0271
#>  7 B         1 mkt_cap  0.0000101    0.0000134       0.756   0.529       -0.167 
#>  8 B         2 mkt_cap  0.000000926  0.000000745     1.24    0.245        0.0520
#>  9 B         3 mkt_cap  0.000000256  0.000000297     0.864   0.400       -0.0152
#> 10 B         4 mkt_cap  0.0000000200 0.000000142     0.141   0.889       -0.0426
#> 11 B         5 mkt_cap -0.000000118  0.000000161    -0.736   0.467       -0.0150
#> 12 B         6 mkt_cap -0.000000323  0.000000237    -1.37    0.182        0.0271
#> # … with 2 more variables: mod_p.value <dbl>, nobs <int>

Created on 2021-04-27 by the reprex package (v0.3.0)

TimTeaFan
  • Yes, this is exactly what I wanted. Much appreciated. I will award the bounty to you (in a couple of hours this is possible). However, I do have another question: is it possible to extend this code to allow for multiple stocks (like an extra grouping)? Extending dat2 by setting an extra group_by argument of stocks does not work. Do you think that is possible? – Cardinal Apr 27 '21 at 17:50
  • @Cardinal: This is actually really easy. I updated my answer, which now shows how to work with more than one stock. We just need to include `stock` in the `group_by` of the summary `data.frame`, and when creating the models we can just add `stock == .env$stock` to the `filter` call. – TimTeaFan Apr 27 '21 at 20:16

An ugly Base R solution (assuming you just want the predicted values returned):

# Allocate some memory such that each stock in data.frame
# can become an element in a list: df_list => empty list: 
df_list <- vector("list", length(unique(df$stock)))
# Split the data.frame into the list: df_list => list of data.frames: 
df_list <- with(df, split(df, stock))
# Number of weeks to consider in rolling regression in this case 4,
# approximating a month: n_weeks => integer scalar: 
n_weeks <- 4
# For each stock in the list: nested lists => stdout(console)
lapply(df_list, function(x){
  # Clean the week vector, filling NAs with values: 
  # week => integer vector
  x$week <- with(x, rev(na.omit(rev(week))[cumsum(!is.na(rev(week)))]))
  # Impute the first return value, if it is missing, with the first
  # non-missing return:  
  x$returns[1] <- with(x, 
     ifelse(is.na(returns[1]), returns[which.max(!is.na(returns))],
       returns[1]
      )
   )
  # Interpolate the return using the previous value: 
  # returns => numeric vector
  x$returns <- with(x, na.omit(returns)[cumsum(!is.na(returns))])
  # For each week: 
   y <- lapply(unique(x$week), function(z){
     # Calculate the range for the regression: 
      rng <- if(z - n_weeks <= 0){
        seq_len(z)
      }else{
        seq(from = (z - n_weeks), to = z, by = 1)
      }
      # Subset the data: sbst => data.frame
      sbst <- x[x$week %in% rng,]
      # Calculate the regression: 
      predict(lm(returns ~ mkt_cap, data = sbst))
    }
   )
  # Return the list of regressions: 
  y
  }
)
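
If you want the coefficient on mkt_cap rather than the fitted values, the same windowing logic applies. A possible variant (an untested sketch reusing df_list and n_weeks from above; note that lm() drops the leading NA return on its own via na.action = na.omit, so the imputation step isn't strictly needed here):

# For each stock: one mkt_cap slope per week: list of numeric vectors
lapply(df_list, function(x){
  # Clean the week vector, filling NAs with values, as above: 
  x$week <- with(x, rev(na.omit(rev(week))[cumsum(!is.na(rev(week)))]))
  # For each week, fit on the same window and keep only the slope: 
  vapply(sort(unique(x$week)), function(z){
    rng <- if(z - n_weeks <= 0) seq_len(z) else seq(z - n_weeks, z)
    coef(lm(returns ~ mkt_cap, data = x[x$week %in% rng, ]))[["mkt_cap"]]
  }, numeric(1))
})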

Data:

df <- structure(list(stock = c("A", "A", "A", "A", "A", "A", "A", "A", 
"A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", 
"A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", 
"A", "A", "A", "A", "A", "A"), date = structure(17959:17998, class = c("IDate", 
"Date")), mkt_cap = c(10522834, 11659707, 11464531, 12217241, 
11619351, 11578687, 11658368, 12722921, 15429934, 16801600, 17898334, 
18492686, 20686683, 22299970, 22924182, 24174351, 24661467, 23351810, 
27826601, 30670482, 32802772, 31778387, 33237006, 34971479, 36774005, 
37594815, 38321816, 35167070, 35625396, 35764747, 28316242, 26124803, 
30390295, 28256485, 29807837, 30970364, 30470093, 30860276, 27946472, 
27662766), week = c(NA, NA, NA, NA, 1L, NA, NA, NA, NA, NA, NA, 
2L, NA, NA, NA, NA, NA, NA, 3L, NA, NA, NA, NA, NA, NA, 4L, NA, 
NA, NA, NA, NA, NA, 5L, NA, NA, NA, NA, NA, NA, 6L), returns = c(NA, 
9.7, -2.25, 5.8, -5.57, -0.899, 0.141, 8.2, 18.8, 7.98, 5.79, 
2.74, 10.7, 6.98, 2.24, 4.79, 1.48, -5.97, 17, 9.22, 6.21, -3.68, 
3.99, 4.59, 4.53, 1.71, 1.42, -9.08, 0.808, -0.094, -23.8, -8.53, 
14.6, -7.76, 4.87, 3.36, -2.1, 0.806, -10.4, -1.48)), class = "data.frame", row.names = c(NA, 
-40L))
hello_friend

Does slide_index() from the slider package do what you want?

    library(tidyverse)
    library(slider)
    library(broom)
    
    set.seed(1001)
    
    ## more or less the slider help page for slide_index()
    df <- data.frame(
      y = rnorm(100),
      x = rnorm(100),
      i = as.Date("2019-08-15") + c(0, 2, 4, 6:102) # <- irregular
    )
    
    head(df)
    #>            y           x          i
    #> 1  2.1886481  0.07862339 2019-08-15
    #> 2 -0.1775473 -0.98708727 2019-08-17
    #> 3 -0.1852753 -1.17523226 2019-08-19
    #> 4 -2.5065362  1.68140888 2019-08-21
    #> 5 -0.5573113  0.75623228 2019-08-22
    #> 6 -0.1435595  0.30309733 2019-08-23
    
    # Rolling regression by date: the current day plus the previous 10
    # days, i.e. an 11-day window counted by date rather than by row.
    out <- df %>% 
      mutate(model = slide_index(df, i, ~ lm(y ~ x, data = .x), 
                       .before = 10, .complete = TRUE)) %>% 
      as_tibble() 
    
    out %>% 
      filter(!(map_lgl(model, ~ is_empty(.x)))) %>% 
      mutate(results = map(model, tidy)) %>% 
      unnest(cols = c(results))
    #> # A tibble: 186 x 9
    #> # ... with columns y, x, i, model, term, estimate, std.error,
    #> #   statistic and p.value: one intercept row and one slope row per
    #> #   complete window
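
For the question's table specifically, a possible adaptation (an untested sketch; it assumes a tibble `dat` with the columns stock, date, week, returns and mkt_cap from the question) fits one model per row over the preceding calendar month within each stock, then keeps only the rows where `week` is set:

    library(dplyr)
    library(slider)
    library(lubridate)
    
    models <- dat %>%
      group_by(stock) %>%
      mutate(model = slide_index(
        cur_data(), date,
        # guard against windows with no usable returns (e.g. the very
        # first row, whose return is NA)
        ~ if (any(!is.na(.x$returns))) lm(returns ~ mkt_cap, data = .x),
        # a function for .before routes the month arithmetic through
        # %m-%, which never lands on a nonexistent date
        .before = ~ .x %m-% months(1)
      )) %>%
      ungroup() %>%
      filter(!is.na(week))
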
Kieran
  • Yes, kind of. However, there are some complications. I only want to run a regression for the times when week is not NA, so for week == 1, week == 2, etc. I could of course keep only the results where is.na(week) == FALSE, but this is only a snippet of the data; the whole data set is much larger. Also, I prefer a solution that uses the date rather than a fixed number of past observations (you use 10), because of the irregular spacing in the data set. – Cardinal Apr 26 '21 at 21:20