I have a list of 282 items that has been classified by 6 independent coders into 20 categories.

The 20 categories are defined by words (for example "perceptual", "evaluation", etc.).

The 6 coders differ in status: 3 of them are experts and 3 are novices.

I calculated all the kappas (and alphas) between each pair of coders, the overall kappa among the 6 coders, and the kappas within the group of 3 experts and within the group of 3 novices.
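
For concreteness, this kind of computation might look roughly as follows in R with the irr package; the ratings matrix, its 282 x 6 layout, and the assumption that columns 1-3 hold the experts are only illustrative, not my actual data:

library(irr)
# hypothetical data: 282 items (rows) x 6 coders (columns),
# values are category labels drawn at random from 20 categories
set.seed(1)
ratings <- matrix(sample(paste0("cat", 1:20), 282 * 6, replace = TRUE),
                  nrow = 282, ncol = 6)

# Cohen's kappa for one pair of coders, e.g. coders 1 and 2
kappa2(ratings[, c(1, 2)])

# Fleiss' kappa among all 6 coders
kappam.fleiss(ratings)

# Fleiss' kappa within each subgroup
# (assuming columns 1-3 are the experts and 4-6 the novices)
kappam.fleiss(ratings[, 1:3])
kappam.fleiss(ratings[, 4:6])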

Now I would like to check whether there is a significant difference between the interrater agreements achieved by the experts vs those achieved by the novices (whose kappa is indeed lower).
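
One idea I had (only a sketch; whether it is a sound test is exactly what I am asking) would be to bootstrap the 282 items and build a confidence interval for the difference between the two subgroup kappas, reusing the hypothetical ratings matrix from above:

set.seed(2)
boot_diff <- replicate(2000, {
  idx <- sample(nrow(ratings), replace = TRUE)   # resample items
  kappam.fleiss(ratings[idx, 1:3])$value -       # expert kappa
    kappam.fleiss(ratings[idx, 4:6])$value       # minus novice kappa
})

# 95% percentile interval for (expert kappa - novice kappa);
# an interval excluding 0 would suggest a significant difference
quantile(boot_diff, c(0.025, 0.975))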

How would you approach this question and report the results?

Thanks!

1 Answer

You can at least easily obtain Cohen's kappa and its standard deviation in R (by far the best option, in my opinion).

The PresenceAbsence package has a Kappa function (see ?Kappa).

You can install the package with the usual install.packages("PresenceAbsence") and then pass a confusion matrix, e.g.:

# load the package
library(PresenceAbsence)
# a dummy 2 x 2 confusion matrix (PresenceAbsence is built around
# presence/absence data, so its confusion matrices are 2 x 2)
set.seed(1)
cm <- matrix(round(runif(4, 0, 10)), nrow = 2)
Kappa(cm)

You will obtain the kappa and its standard deviation. As far as I know, there are limitations to significance testing with the kappa statistic (e.g. see https://en.wikipedia.org/wiki/Cohen's_kappa#Significance_and_magnitude).
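
That said, if you just want a rough comparison of two independent kappas (e.g. the expert kappa vs the novice kappa from your question), a common approximation is a z-test on the difference using each kappa's standard error; the four numbers below are placeholders, not real results:

# placeholder estimates: kappa and its sd for each group
k_exp <- 0.70; sd_exp <- 0.04
k_nov <- 0.55; sd_nov <- 0.05

# approximate two-sided z-test for the difference between the two kappas
z <- (k_exp - k_nov) / sqrt(sd_exp^2 + sd_nov^2)
p <- 2 * pnorm(-abs(z))
c(z = z, p = p)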

Hope this helps!

Vincent Bonhomme