
I'm doing a meta-regression analysis of Granger non-causality tests for my Master's thesis. The effects of interest are F- and chi-square distributed, so to use them in a meta-regression they must be converted to normal variates. Right now I'm using the probit function (the inverse of the standard normal cumulative distribution function) for this, which in R is essentially qnorm() applied to the p-values (as far as I know).

My problem is that the underlying studies sometimes report p-values of exactly 0 or 1. Transforming these with qnorm() gives infinite values (-Inf and Inf). My workaround is to replace p-values of 0 with a value near 0, for example 1e-180, and p-values of 1 with a value near 1, for example 0.9999999999999999 (only sixteen 9s are possible, because R rounds anything with more 9s to exactly 1).
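For what it's worth, the sixteen-nines limit is just double-precision rounding: the largest double below 1 is 1 - .Machine$double.eps/2, and any literal closer to 1 rounds to exactly 1. A small sketch illustrating this (the substitute value 1e-180 is of course still an arbitrary choice):

```r
# The largest representable double below 1 is 1 - 2^-53,
# which is exactly what the sixteen-nines literal parses to
eps_half <- .Machine$double.eps / 2        # 2^-53, about 1.11e-16
upper <- 1 - eps_half
print(identical(upper, 0.9999999999999999))  # TRUE: same double

# One more 9 and the literal rounds to exactly 1
print(0.99999999999999999 == 1)              # TRUE

# qnorm() stays finite at these clamped boundaries
print(qnorm(upper))    # large positive, but finite
print(qnorm(1e-180))   # large negative, but finite
```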

Does anybody know a better solution for this problem? Is my approach mathematically reasonable? Excluding the p-values of 0 and 1 would change the results completely and is therefore, in my opinion, wrong. My current code:

# Replace exact-0 and exact-1 p-values with nearby representable values
df$p_val[df$p_val == 0] <- 1e-180
df$p_val[df$p_val == 1] <- 0.9999999999999999

# Probit transform (sign flipped for interpretability)
df$probit <- -qnorm(df$p_val)

The minus sign in front of qnorm() helps intuition: positive probit values are then associated with rejecting the null hypothesis of non-causality at higher levels of significance (smaller p-values map to larger positive values).
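As a side note, -qnorm(p) is the same transform as the upper-tail quantile, so the sign flip can equivalently be written with qnorm()'s lower.tail argument; a tiny sketch of the sign convention:

```r
p <- c(0.01, 0.5, 0.99)

# -qnorm(p) and the upper-tail quantile coincide
probit_neg  <- -qnorm(p)
probit_tail <- qnorm(p, lower.tail = FALSE)
print(all.equal(probit_neg, probit_tail))  # TRUE

# Strong rejections (small p) map to positive values,
# p = 0.5 maps to 0, and large p map to negative values
print(probit_neg)
```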

I would be really glad for support / hints / etc.!

BongoKing
  • You need to remember that the studies which report a p value of 0 are themselves approximating a very small but non-zero p value. – Allan Cameron Feb 20 '20 at 12:02
  • Yes, thanks. But I don't know how small it actually is. Is your idea to infer the p-value from the number of decimal digits reported in each study? – BongoKing Feb 20 '20 at 13:51
  • Most studies with a very low p value are reported with an upper bound for the p value, like p < 0.0001. In this case, I guess you can assume the upper bound (0.0001) – Allan Cameron Feb 20 '20 at 13:55
  • Okay, thanks a lot! – BongoKing Jan 21 '21 at 10:22
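Following Allan Cameron's suggestion in the comments, one way to implement it is to record the reported upper bound wherever a study gives only "p < bound", and substitute that bound before the transform. A minimal sketch (the toy data and the hypothetical column df$p_bound are assumptions for illustration, not from any real dataset):

```r
# Toy data: NA in p_val marks studies that report only "p < bound"
df <- data.frame(
  p_val   = c(0.03, NA,     0.7, NA),
  p_bound = c(NA,   0.0001, NA,  0.001)
)

# Substitute the reported upper bound where no exact p-value is given
df$p_use <- ifelse(is.na(df$p_val), df$p_bound, df$p_val)

# Probit transform with the same sign convention as in the question
df$probit <- -qnorm(df$p_use)
print(df$probit)  # all finite; bounded studies get large positive values
```

Using the upper bound is conservative: the true probit value for such a study is at least as large as the transformed bound.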

0 Answers