The `adonis` function (vegan package) in R apparently defaults to 3 significant figures in the output p-values. Can this be altered so that more significant figures are output?
- You can see in the code for the `adonis` function that the p-value is calculated as `P <- (rowSums(t(f.perms) >= F.Mod - EPS) + 1)/(permutations + 1)`. Since it's divided by the number of permutations, that determines the number of decimal places. -Keaton Stagaman – Robert Steury Dec 28 '16 at 20:07
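The formula quoted above can be sketched outside of vegan to see why the permutation count caps the p-value's resolution. A minimal illustration, not vegan's actual internals; `perm_p` is a made-up helper name:

```r
# Sketch of the quoted p-value formula (hypothetical helper, not vegan code).
EPS <- sqrt(.Machine$double.eps)
perm_p <- function(f_obs, f_perms, permutations) {
  (sum(f_perms >= f_obs - EPS) + 1) / (permutations + 1)
}

set.seed(1)
f_perms <- rf(999, df1 = 3, df2 = 12)  # stand-in for 999 permuted F statistics
perm_p(5, f_perms, 999)
# Every possible result is k/1000 for some integer k >= 1, so with the
# default 999 permutations three decimal places already show everything.
```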
- Why do you want more than 3 significant figures? – Richard Telford Dec 28 '16 at 20:07
- It's unclear what you mean. Are you asking for 4 or more cut points for the significance stars, or just more decimal places in the p-value column? By the way, you should always provide a reproducible example (one can be found on the `?adonis` doc page). – Hack-R Dec 28 '16 at 20:10
- @RobertSteury that has nothing to do with how many digits are printed in the output... sounds like a `print` option somewhere (and hopefully not explicit `round`ing anywhere in the code). – Gregor Thomas Dec 28 '16 at 20:10
- But what about only 1 decimal? My data (18 samples, two groups) consistently gives a p-value to only one decimal (0.1), and `adonis` says it's running 719 permutations. Would `adonis` give only 1 decimal if the result came out to exactly 0.100, such as 72/720? Or is there something else going on here? – emudrak Sep 01 '20 at 16:51
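The scenario in the comment above checks out arithmetically, assuming the quoted `(k + 1)/(permutations + 1)` formula: with 719 permutations every p-value is a multiple of 1/720, and 72/720 is exactly 0.1, so asking for more digits would not reveal anything hidden:

```r
# With 719 permutations, p-values live on the grid k/720.
p <- (71 + 1) / (719 + 1)
p                      # 0.1 exactly
print(p, digits = 10)  # still 0.1: nothing was rounded away
```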
1 Answer
I think looking at p-values to more than three significant digits is a very bad idea. However, `adonis` already gives you this for numbers of permutations other than 999:
```r
library(vegan)
data(dune)
data(dune.env)
## default test by terms
adonis(dune ~ Management*A1, data = dune.env, permutations = 1000)
```
Results:

```
Call:
adonis(formula = dune ~ Management * A1, data = dune.env, permutations = 1000)

Permutation: free
Number of permutations: 1000

Terms added sequentially (first to last)

              Df SumsOfSqs MeanSqs F.Model      R2   Pr(>F)
Management     3    1.4686 0.48953  3.2629 0.34161 0.001998 **
A1             1    0.4409 0.44089  2.9387 0.10256 0.016983 *
Management:A1  3    0.5892 0.19639  1.3090 0.13705 0.228771
Residuals     12    1.8004 0.15003         0.41878
Total         19    4.2990                 1.00000
```
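The six-decimal p-values in the table fall straight out of the permutation arithmetic: with `permutations = 1000` every p-value is (k + 1)/1001, and those fractions simply need more digits to print. A quick check against the first two rows:

```r
# Reproduce the printed p-values from the permutation arithmetic:
# (count of permuted F values >= observed, plus 1) / (1000 + 1).
signif(2 / 1001, 4)    # 0.001998, the Management row
signif(17 / 1001, 5)   # 0.016983, the A1 row
```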

Peter Ellis
- I figured that out, thank you. Is it a bad idea if I am testing the influence of varying DNA sequencing depth on my permanova results? – Robert Steury Dec 29 '16 at 03:49
- I don't know about the field, but it will certainly give you a false sense of precision. The p-value is the probability of seeing the data given that the model is correct and the null hypothesis's set of parameters is the correct one. As the model is certainly wrong in some unknown, unquantifiable way, it's a mistake to be too precise about it... Perhaps ask a new question on Cross Validated (the statistics Stack Exchange) with a more precise query and the background info? I know that they have developed whole fields of DNA multiple-comparisons stats to deal with the false-positive problem. – Peter Ellis Dec 29 '16 at 03:57
- You should not confuse the number of significant digits with numerical accuracy. For instance, the following will give you more than three significant digits, but no accuracy to speak of: `adonis2(dune ~ Management*A1, dune.env, permutations = 13)` – Jari Oksanen Dec 29 '16 at 06:23
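Jari's point in numbers, assuming the same (k + 1)/(permutations + 1) formula quoted in the comments above: 13 permutations put every p-value on a grid of 1/14, so you get plenty of printed digits but almost no accuracy:

```r
# Smallest achievable p-value with permutations = 13:
(0 + 1) / (13 + 1)   # 0.07142857... lots of digits, little accuracy
```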
- Thanks for the feedback. I understand, Jari and Peter, what you both mean. I am somewhat grasping at straws. If the p-value is <0.05, it doesn't matter what it was rounded from, because in either case the null hypothesis is rejected. What lies at the heart of my question is whether or not sequencing depth influences the probability of rejecting a null hypothesis. Nonetheless, I was surprised to find that even at a very shallow sampling depth, 100 reads/sample, the anova models held up the same as if I'd sampled at 25K reads/sample. – Robert Steury Dec 29 '16 at 21:00