
I have 4 raters who have rated 10 subjects. Because I have multiple raters (and in my actual dataset, these 4 raters have rated the 10 subjects on multiple variables), I've chosen to use Light's Kappa to calculate inter-rater reliability. I've run the Light's Kappa code shown below and included an example of my data.

My question is why the resulting kappa value (kappa = 0.545) is fairly low even though the raters agree on almost all ratings. Is there some other way to calculate inter-rater reliability (e.g., pairwise combinations between raters)? Any help is appreciated.

library(irr)  # kappam.light() comes from the irr package

# Example ratings: raters 1-3 agree on every subject; rater 4 differs only on subject 3
subjectID <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
rater1 <- c(3, 2, 3, 2, 2, 2, 2, 2, 2, 2)
rater2 <- c(3, 2, 3, 2, 2, 2, 2, 2, 2, 2)
rater3 <- c(3, 2, 3, 2, 2, 2, 2, 2, 2, 2)
rater4 <- c(3, 2, 1, 2, 2, 2, 2, 2, 2, 2)

df <- data.frame(subjectID, rater1, rater2, rater3, rater4)

kappam.light(df)
  • you need to review the docs, your input is not correct: `ratings n*m matrix or dataframe, n subjects m raters` – rawr Sep 28 '22 at 18:57
  • Thanks for your response. I'm very new to R, so I'm not sure I understand the documentation for this function. Does that mean I need to re-shape my data? – Roaring Sep 28 '22 at 19:08
  • no, just leave out subjectID when making `df` – rawr Sep 28 '22 at 19:09
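Following rawr's comment above, a minimal sketch of what the corrected call might look like, reusing the rater vectors defined in the question (this assumes the irr package, which provides kappam.light(); the kappa2() call for a single rater pair is an illustrative addition, not something from the thread):

library(irr)

# Pass only the ratings: an n x m data frame with n subjects in rows
# and m raters in columns, leaving subjectID out as suggested above
ratings <- data.frame(rater1, rater2, rater3, rater4)
kappam.light(ratings)

# One way to look at pairwise combinations between raters is Cohen's
# kappa for each pair of raters, e.g. via kappa2() from the same package
kappa2(data.frame(rater1, rater4))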
