
I have a continuous response variable yld and a categorical predictor check (with 3 levels). I did a one-way ANOVA and a post-hoc test to see which levels differ from each other.

mdl <- aov(sqrt(var$yld) ~ var$check)
summary(mdl)
TukeyHSD(mdl)

               Df Sum Sq Mean Sq F value   Pr(>F)    
var$check      2   5162  2581.2   13.51 1.46e-06 ***
Residuals   2775 530395   191.1                     

Tukey multiple comparisons of means
95% family-wise confidence level

Fit: aov(formula = sqrt(var$yld) ~ var$check)

$`var$check`

         diff       lwr       upr     p adj
 NC-LC -3.0123196 -4.529649 -1.494991 0.0000101
 RC-LC -2.8330205 -4.348031 -1.318010 0.0000358
 RC-NC  0.1792991 -1.310563  1.669161 0.9570495

Now, this data was collected over multiple sites, so I want to use site.code as a random effect.

library(lme4)
mdl1 <- lmer(sqrt(yld) ~ check + (1 | site.code), data = var)
summary(mdl1)

This gives a different set of output, but the most important part is:

Fixed effects:
        Estimate Std. Error t value
(Intercept)  50.7267     1.3028   38.94
checkNC      -2.7075     0.5449   -4.97
checkRC      -2.5048     0.5441   -4.60

It takes the LC level as the intercept (the reference level) and estimates how NC and RC differ from it. I have two questions:

1) Why is there no p-value displayed in the output of mdl1?
2) This output compares NC and RC with the intercept (LC). Is there a post-hoc test to do pair-wise comparisons of all levels?
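
(For reference, the LC baseline comes from R's default treatment contrasts: the first factor level becomes the intercept. A minimal sketch, assuming the same var data frame and that check is already a factor, of how the reference level could be inspected or changed; the model name mdl2 is only illustrative:)

# show the contrast coding currently used for check (LC is the baseline)
contrasts(var$check)

# hypothetical: make NC the reference level instead and refit
var$check <- relevel(var$check, ref = "NC")
mdl2 <- lmer(sqrt(yld) ~ check + (1 | site.code), data = var)
summary(mdl2)  # now LC and RC are compared against NC
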

Thanks


1 Answer


Here is the solution for the pairwise comparisons, using the multcomp package:

install.packages("multcomp")  # if not already installed
library(multcomp)
# Tukey-style pairwise comparisons of the check levels on the mixed-model fit
summary(glht(mdl1, linfct = mcp(check = "Tukey")))
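
As for the missing p-values (question 1): lme4 deliberately does not report them, because the denominator degrees of freedom for the t statistics are not well defined in mixed models. One common workaround, not used in the original post, is to refit with the lmerTest package, whose lmer() wraps lme4 and adds Satterthwaite-approximated degrees of freedom and p-values to the summary. A sketch under that assumption:

install.packages("lmerTest")  # if not already installed
library(lmerTest)

# same model as before; the summary now includes df and Pr(>|t|)
mdl1 <- lmer(sqrt(yld) ~ check + (1 | site.code), data = var)
summary(mdl1)

# the glht() pairwise comparisons above work on this fit as well
summary(glht(mdl1, linfct = mcp(check = "Tukey")))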