I asked this question earlier, but the scope has since grown larger and more complicated. Basically, I want to combine two tables and calculate weighted averages for any duplicate IDs. The problem is that I have multiple sets of columns, and each set needs to use a different weight. Here are my two datasets (RMS1 and RMS2) and the desired outcome (Joined):
RMS1:
id  freq1  sev1  count1  freq2  sev2  count2
111     0     2      50      1     2      25
222     1     3      75      2     4      50
RMS2:
id  freq1  sev1  count1  freq2  sev2  count2
222     2     4      25      6     6     200
333     4     5      60      3     2      20
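For reproducibility, here is the same data built directly as R data frames:

RMS1 <- data.frame(
  id = c(111, 222),
  freq1 = c(0, 1), sev1 = c(2, 3), count1 = c(50, 75),
  freq2 = c(1, 2), sev2 = c(2, 4), count2 = c(25, 50)
)
RMS2 <- data.frame(
  id = c(222, 333),
  freq1 = c(2, 4), sev1 = c(4, 5), count1 = c(25, 60),
  freq2 = c(6, 3), sev2 = c(6, 2), count2 = c(200, 20)
)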
Joined:
id  freq1  sev1  freq2  sev2
111     0     2      1     2
222 1.25* 3.25*  5.2** 5.6**
333     4     5      3     2
The * values are weighted averages using count1 as the weight, while the ** values are weighted averages using count2. For example, for id 222: freq1 = (1*75 + 2*25)/(75 + 25) = 1.25, and freq2 = (2*50 + 6*200)/(50 + 200) = 5.2. My full dataset has 13 such groups of columns, each weighted by its own count column. I have this code, but it obviously needs to be expanded to take in multiple weights:
library(dplyr)

Joined <- bind_rows(RMS1, RMS2) %>%
  group_by(id) %>%
  # weights every remaining column by count1, which is wrong for the freq2/sev2 group
  summarise(across(-count1, ~ weighted.mean(.x, count1))) %>%
  as.data.frame()
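For just the two groups in the example, I think the expansion would look something like this, hand-coding one across() call per group so each freq/sev pair is weighted by its own count, but that clearly won't scale nicely to 13 groups:

library(dplyr)

Joined <- bind_rows(RMS1, RMS2) %>%
  group_by(id) %>%
  summarise(
    # each freq/sev pair is weighted by its own count column
    across(c(freq1, sev1), ~ weighted.mean(.x, count1)),
    across(c(freq2, sev2), ~ weighted.mean(.x, count2)),
    .groups = "drop"
  ) %>%
  as.data.frame()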
Being new to R, I have no idea how to generalize this to all 13 groups without writing out every across() call by hand. I found a possibly related question, but it's going over my head; my rough attempt at applying its reshaping idea is below.
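If I understood the related question correctly, the idea is to reshape the data long so every group shares one set of freq/sev/count columns, take the weighted means, and reshape back. Here's my sketch of that (assuming the columns always follow the freqN/sevN/countN naming pattern), though I'm not confident it's right:

library(dplyr)
library(tidyr)

Joined <- bind_rows(RMS1, RMS2) %>%
  # split names like "freq1" into a value ("freq") and a group number ("1")
  pivot_longer(-id,
               names_to = c(".value", "grp"),
               names_pattern = "([a-z]+)(\\d+)") %>%
  group_by(id, grp) %>%
  # within each id and group, weight freq and sev by that group's count
  summarise(freq = weighted.mean(freq, count),
            sev  = weighted.mean(sev, count),
            .groups = "drop") %>%
  pivot_wider(names_from = grp,
              values_from = c(freq, sev),
              names_glue = "{.value}{grp}") %>%
  as.data.frame()

I think the columns come out ordered freq1, freq2, sev1, sev2 rather than grouped in pairs, but the values should match the Joined table above. Thanks in advance.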