
I guess I'm missing some basic knowledge and I may be overlooking something important here...

Background: I have a dataset in which animals from 4 different groups (1 control and 3 treatment groups) underwent grip strength testing. Each trial consisted of ~5 measurements. To see how the treatment groups differed from the controls, I fit a linear mixed model using the lmer function from lme4 as follows:

library(lme4)

model1 <- lmer(griPASTA ~ group + (1|ID), data = all)

griPASTA is the grip strength measurement, group indicates the treatment, and ID identifies each animal.

sjPlot::tab_model(model1, p.adjust="none")

creates this: [OHDA 6 vs CTR] p = 0.001

... and including the Holm adjustment

sjPlot::tab_model(model1, p.adjust="holm")

provides this: [OHDA 6 vs CTR] p = 0.004

However, when I specify the contrasts myself and apply the same Holm correction, I get quite different p-values.

e.g.

library(emmeans)
library(magrittr)  # for %>%

m1_emm <- emmeans(model1, specs = "group")
CTR     <- c(1, 0, 0, 0)
OHDA_12 <- c(0, 1, 0, 0)
OHDA_6  <- c(0, 0, 1, 0)
OHDA_6X <- c(0, 0, 0, 1)
m1_simple <- contrast(m1_emm,
                      method = list("CTR - OHDA 12"        = CTR - OHDA_12,
                                    "CTR - OHDA 6"         = CTR - OHDA_6,
                                    "CTR - OHDA 6 w/o REB" = CTR - OHDA_6X),
                      adjust = "holm") %>% summary(infer = TRUE)

This gives [OHDA 6 vs CTR] p = 0.0100.

For comparison, this is what I get when I remove the correction:

m1_simple <- contrast(m1_emm,
                      method = list("CTR - OHDA 12"        = CTR - OHDA_12,
                                    "CTR - OHDA 6"         = CTR - OHDA_6,
                                    "CTR - OHDA 6 w/o REB" = CTR - OHDA_6X),
                      adjust = "none") %>% summary(infer = TRUE)

This gives [OHDA 6 vs CTR] p = 0.0033.

So the Holm-corrected p-value for the CTR vs OHDA 6 comparison is 0.004 using tab_model() with p.adjust = "holm", but 0.01 when I apply the correction in the emmeans contrast() function.

Do you have any idea what the source of this difference is and how to approach it? I assume I'm missing something and the adjustments actually take different sets of tests into account in the two approaches?
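For reference, here is a minimal way to check which set of p-values the Holm correction runs over. This is just a sketch: it assumes model1 is refit with lmerTest::lmer(), since a plain lme4 summary() does not report p-values at all.

# Sketch: Holm over all fixed-effect p-values (intercept included)
# vs. Holm over only the three treatment coefficients.
# Assumes model1 is refit with lmerTest::lmer(); plain lme4 does not
# include a "Pr(>|t|)" column in summary().
library(lmerTest)
model1 <- lmer(griPASTA ~ group + (1|ID), data = all)

p.unadj <- coef(summary(model1))[, "Pr(>|t|)"]
round(p.adjust(p.unadj, "holm"), 4)      # all 4 coefficients
round(p.adjust(p.unadj[-1], "holm"), 4)  # intercept excluded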

Anyway, I guess I'm missing some basic stats/R knowledge, so any help would be really appreciated.

jan
  • Could you please post your results as text rather than as images? – Ben Bolker May 08 '21 at 16:31
  • Thanks for the suggestion, Ben. I added p-values as text. – jan May 08 '21 at 18:27
  • It would still be nice to have the *full* output pasted as text, and to have a [mcve]. If I am reading correctly, the p-values differ between `tab_model()` and `contrast()` even *without* Holm correction, so I would look there first. `contrast()` says it is using Kenward-Roger approximation; do you know what `tab_model` is doing? – Ben Bolker May 08 '21 at 21:31
  • And it would be much simpler to use `method = "trt.vs.ctrl1"` rather than manually constructing the same contrasts (see the sketch just below these comments). – Russ Lenth May 09 '21 at 18:30
  • I looked at some of the images, and what I see is that the results are of regression coefficients. Three of the four regression coefficients are comparisons, but the other one is the intercept. So you were comparing a set of 4 Holm-corrected P values with another set of 3 Holm-corrected P values. That explains the difference, since the Holm correction is applied to an entire set of P values. – Russ Lenth May 09 '21 at 18:39
  • It is also possible that different d.f. calculations are done. emmeans uses the Kenward-Roger method; I don't know what tab_model does. – Russ Lenth May 10 '21 at 02:06
  • @RussLenth Thanks. I may be wrong, but if this is the case wouldn't the correction of a set of 4 p-values be more restrictive? Here, the tab_model function seems to be less restrictive (e.g. 0.004 vs 0.01). I'll try to look into the degrees of freedom approximation method. – jan May 10 '21 at 06:39
  • There indeed does seem to be a lot that just doesn't make much sense. I wonder if you were to start completely over and do all this again, you'd get more reconcilable results. Sometimes, old versions of objects get mixed up with new versions, and I wonder if that's what's happening here. – Russ Lenth May 10 '21 at 12:48
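Following up on Russ Lenth's suggestion above, the three hand-built contrasts can be specified with the built-in treatment-vs-control method instead. A sketch, reusing m1_emm from the question; note that these contrasts run treatment minus control, so the signs are flipped relative to the "CTR - ..." versions, while the p-values are identical:

library(emmeans)

# Each treatment level compared against the first (control) level,
# with a Holm adjustment applied over the three tests.
contrast(m1_emm, method = "trt.vs.ctrl1", adjust = "holm")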

1 Answer


I tried a similar example with fewer moving parts:

data(PlantGrowth)
mod <- lm(weight ~ group, data = PlantGrowth)

## sjPlot::tab_model(mod, show.se = TRUE, show.stat = TRUE, p.adjust = "none")
## Yields the same results as...

summary(mod)$coefficients
#>             Estimate Std. Error   t value     Pr(>|t|)
#> (Intercept)    5.032  0.1971284 25.526514 1.936575e-20
#> grouptrt1     -0.371  0.2787816 -1.330791 1.943879e-01
#> grouptrt2      0.494  0.2787816  1.771996 8.768168e-02

As noted in the comments, the sjPlot::tab_model() call above just yields an HTML version of the coefficients in the summary() of the model.

So I am going to get the P values, then try the Holm adjustment on them:

p.unadj <- summary(mod)$coefficients[, 4]

# Unadjusted P values...
round(p.unadj, 4)
#> (Intercept)   grouptrt1   grouptrt2 
#>      0.0000      0.1944      0.0877

# Holm-adjusted P values...
round(p.adjust(p.unadj, "holm"), 4)
#> (Intercept)   grouptrt1   grouptrt2 
#>      0.0000      0.1944      0.1754

# Holm adjustment, excluding the intercept...
round(p.adjust(p.unadj[-1], "holm"), 4)
#> grouptrt1 grouptrt2 
#>    0.1944    0.1754

# Replacing the intercept's P value with a larger one...
round(p.adjust(c(0.15, p.unadj[-1]), "holm"), 4)
#>           grouptrt1 grouptrt2 
#>     0.300     0.300     0.263 

I verified that sjPlot::tab_model(mod, p.adjust = "holm") yields the same adjusted P values (0.0000, 0.1944, 0.1754) as shown above. In this particular example, the adjusted P values are the same whether or not we exclude the intercept, but that would not necessarily be true for a different pattern of P values, as the final result above shows.

Now look at the emmeans results:

library(emmeans)

EMM <- emmeans(mod, "group")
CON <- contrast(EMM, "trt.vs.ctrl1")

# Unadjusted P values
test(CON, adjust = "none")
#>  contrast    estimate    SE df t.ratio p.value
#>  trt1 - ctrl   -0.371 0.279 27 -1.331  0.1944 
#>  trt2 - ctrl    0.494 0.279 27  1.772  0.0877

# Holm-adjusted P values
test(CON, adjust = "holm")
#>  contrast    estimate    SE df t.ratio p.value
#>  trt1 - ctrl   -0.371 0.279 27 -1.331  0.1944 
#>  trt2 - ctrl    0.494 0.279 27  1.772  0.1754 
#> 
#> P value adjustment: holm method for 2 tests

Created on 2021-05-10 by the reprex package (v1.0.0)

These results match those obtained above using the unadjusted P values from summary() and excluding the intercept before the Holm adjustment.

I am convinced that the discrepancies noted in the OP are due to some objects being "stale", i.e., obtained before some change was made, while other results were obtained after. I suggest starting fresh: fit the model again, and re-compute all the results that are being compared. I have had the same thing happen to me at times, because it is easy to forget one step that was made in trying out a whole bunch of other things.

The degrees-of-freedom issue could cause some discrepancies in the case of a mixed model, but I'd expect them to be minor unless the d.f. turn out to be extremely small.
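To rule out the d.f. issue for the mixed model, both approximations can be requested explicitly in emmeans. A sketch, assuming the model1 object from the question is available:

library(emmeans)

# Compare the two denominator-d.f. methods for the same lmer fit;
# "kenward-roger" is the emmeans default for smaller models.
emmeans(model1, "group", lmer.df = "kenward-roger")
emmeans(model1, "group", lmer.df = "satterthwaite")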

Russ Lenth