
I am trying to calculate the reliability of a single rater who measured 3 different variables twice on each of 10 subjects (i.e., intra-rater / test-retest reliability). Each column below is the difference between the first and second measurements. I am not 100% sure this is the correct way to calculate the reliability, and I wonder whether I should calculate a Kappa coefficient instead.

       ID  Delta_A    Delta_B       Delta_C
1  300206     -0.1     -0.2           1.3
2  100114      0.1     -0.4          -1.0
3  200211      0.0     -0.2          -1.0
4  200210      0.1      0.1          -0.3
5  200306     -0.1     -0.1           0.9
6  200212      0.0     -0.2          -1.0
7  100128      0.0      0.1          -2.6
8  200317     -0.1      0.0           0.9
9  200126     -0.1     -0.3          -0.3
10 100126     -0.1     -0.6          -0.4
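For reproducibility, the table above can be rebuilt as a data frame (values copied from the table; column names as shown):

```r
# The difference table above as a reproducible data frame
df <- data.frame(
  ID      = c(300206, 100114, 200211, 200210, 200306,
              200212, 100128, 200317, 200126, 100126),
  Delta_A = c(-0.1,  0.1,  0.0,  0.1, -0.1,  0.0,  0.0, -0.1, -0.1, -0.1),
  Delta_B = c(-0.2, -0.4, -0.2,  0.1, -0.1, -0.2,  0.1,  0.0, -0.3, -0.6),
  Delta_C = c( 1.3, -1.0, -1.0, -0.3,  0.9, -1.0, -2.6,  0.9, -0.3, -0.4)
)
```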

I used icc(df$Delta_A) and I got this output, full of NA values and warnings:

Single Score Intraclass Correlation

   Model: oneway 
   Type : consistency 

   Subjects = 10 
     Raters = 1 
     ICC(1) = NA

 F-Test, H0: r0 = 0 ; H1: r0 > 0 
     F(9,0) = NA , p = NA 

 95%-Confidence Interval for ICC Population Values:
  NA < ICC < NA
Warning messages:
1: In qf(1 - alpha/2, ns - 1, ns * (nr - 1)) : NaNs produced
2: In qf(1 - alpha/2, ns * (nr - 1), ns - 1) : NaNs produced
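From the output it looks like icc() expects a subjects x measurements matrix with at least two columns: passing a single difference column gives Raters = 1, so the denominator degrees of freedom ns * (nr - 1) is 0 and everything comes out NA. A minimal sketch of what I believe the intended call is, using hypothetical first/second measurements (not my real data, since I only kept the differences):

```r
# Sketch, assuming the original paired measurements are available.
# irr::icc() wants one column per measurement occasion, not their difference.
library(irr)

# Hypothetical first and second measurements for one variable:
first  <- c(5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9)
second <- first + c(-0.1, 0.1, 0.0, 0.1, -0.1, 0.0, 0.0, -0.1, -0.1, -0.1)

# Two columns -> Raters = 2, so the F test has df (ns - 1, ns * (nr - 1)) = (9, 10)
icc(cbind(first, second), model = "oneway", type = "consistency", unit = "single")
```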

Any help is appreciated.

Thank you,

Rooz
