
I want to simulate the effect of different kinds of multiple testing correction, such as Bonferroni, Fisher's LSD, Duncan, Dunn-Sidak, Newman-Keuls, Tukey, etc., on ANOVA.

I guess I should simply run a regular ANOVA and then accept as significant the p-values I correct using p.adjust. But I don't understand how this p.adjust function works. Could you give me some insight into p.adjust()?
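For context, here's a minimal sketch of the kind of workflow I have in mind (the data are hypothetical, simulated under the null):

```r
set.seed(1)
g <- gl(3, 20)        # factor with 3 groups of 20 observations
y <- rnorm(60)        # response simulated under the null hypothesis

anova(aov(y ~ g))     # the overall one-way ANOVA

# pairwise comparisons, correcting the p-values in different ways
pairwise.t.test(y, g, p.adjust.method = "bonferroni")
pairwise.t.test(y, g, p.adjust.method = "none")

TukeyHSD(aov(y ~ g))  # Tukey's HSD has its own dedicated function
```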

when running:

> p.adjust(c(0.05,0.05,0.1),"bonferroni")
# [1] 0.15 0.15 0.30

Could someone explain what this output means?

Thank you for your answer. I know a bit about all of that, but I still don't understand the output of p.adjust. I'd expect that...

p.adjust(0.08, "bonferroni", n = 10)

... would return 0.008 and not 0.8. Doesn't n = 10 mean that I'm doing 10 comparisons? And isn't 0.08 the "original alpha" (I mean the threshold I'd use to reject the null hypothesis if I had one simple comparison)?
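To illustrate my confusion, here is what I observe versus what I expected:

```r
p.adjust(0.08, "bonferroni", n = 10)  # returns 0.8, i.e. 0.08 * 10
0.08 / 10                             # 0.008, what I expected instead
```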

Remi.b

1 Answer


You'll have to read about each multiple testing correction technique, whether it controls the False Discovery Rate (FDR) or the Family-Wise Error Rate (FWER). (Thanks to @thelatemail for suggesting the abbreviations be expanded.)

The Bonferroni correction controls the FWER by lowering the significance level from alpha to alpha/n, where n is the number of hypotheses tested in the multiple comparison (here n = 3).

Let's say you are testing at alpha = 5%, meaning that if a p-value is < 0.05 you reject the null. For n = 3, the Bonferroni correction divides alpha by 3: 0.05/3 ≈ 0.0167, and you then check whether your p-values are < 0.0167.

Equivalently, instead of checking pval < alpha/n, you can move the n to the other side and check pval * n < alpha, so that alpha keeps its original value. Your p-values are therefore multiplied by 3 and then compared against alpha = 0.05, for example.
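To make the equivalence concrete, a quick check (with made-up p-values):

```r
pvals <- c(0.01, 0.02, 0.04)
alpha <- 0.05
n     <- length(pvals)

raw_check <- pvals < alpha / n                      # compare raw p-values to alpha/n
adj_check <- p.adjust(pvals, "bonferroni") < alpha  # compare p * n to alpha
identical(raw_check, adj_check)                     # TRUE: same rejections either way
```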

Therefore, the output you obtain is the FWER-controlled p-value, and if it is < alpha (5%, say) you reject the null; otherwise you fail to reject it.

For each test, there are different procedures to control the false positives that arise from multiple testing. Wikipedia might be a good starting point to learn how the other methods correct for false positives.

In general, the output of p.adjust is a multiple-testing-corrected p-value. In the case of Bonferroni, it is the FWER-controlled p-value; in the case of the BH method, it is the FDR-corrected p-value (also called a q-value).
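For example (a hypothetical p-value vector), BH adjusts less aggressively than Bonferroni, because FDR control is less strict than FWER control:

```r
pvals <- c(0.001, 0.01, 0.04, 0.2)
p.adjust(pvals, "bonferroni")  # each p multiplied by n = 4 (capped at 1)
p.adjust(pvals, "BH")          # roughly p * n / rank, then made monotone
```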

Hope this helps a bit.

Arun