
For my study I performed a meta-analysis of viral load measurements to test whether the specific interaction between A and B influences viral load levels.

This is the forest plot I obtained using R: MetaAnalysis.jpg

However, I don't know how to interpret it. I understand that this result is significant because p=0.0073 and because the 95% CI of the overall effect estimate does not overlap 0. However, what does it mean that the diamond is on the right side of the forest plot?

Svalf
  • It means the overall effect is positive. Since the effect is an interaction, this means the effect of A becomes larger (or more positive, or less negative) as B increases, and vice versa. Note that this question is not about programming, but about statistics, and is therefore off-topic here. – Axeman Sep 20 '16 at 11:36

1 Answer


It depends on how the individual effect sizes are computed. This seems to be a forest plot of a meta-analysis of the correlations between A and B for each viral load (you mention an association p-value). Perhaps you used the difference in the z-transformed correlations across the different viral loads and the associated standard deviations (?). If so, the way you computed this difference will help you interpret the overall effect size. Is it computed as the value for large viral load minus the value for small viral load? If so, the overall estimate shows a larger effect of the interaction between A and B in the large viral loads. (If the diamond were on the left side of the vertical dotted line - i.e. the line of 'no effect' - it would have reflected a larger interaction effect in small viral loads.)
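If the individual effect sizes really are Fisher z-transformed correlations, a minimal sketch of how they could be prepared with metafor's `escalc()` is shown below; the data frame and the column names `cohort`, `cor_AB` and `n` are hypothetical, purely for illustration:

```r
library(metafor)

# Hypothetical per-cohort correlations between A and B (made-up values)
dat <- data.frame(
  cohort = c("Cohort 1", "Cohort 2", "Cohort 3"),
  cor_AB = c(0.32, 0.18, 0.41),  # observed correlation in each cohort
  n      = c(120, 85, 200)       # sample size in each cohort
)

# escalc() appends yi (Fisher z-transformed correlation) and vi (its sampling variance)
dat <- escalc(measure = "ZCOR", ri = cor_AB, ni = n, data = dat)
```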

One additional comment: you seem to estimate the overall effect size using random effects (incidentally, the size of the black square for each individual study reflects the weight assigned to that study). The heterogeneity test seems non-significant (see the heterogeneity p-value), meaning that heterogeneity does not affect the results of your meta-analysis. If this test turns out significant, you need to consider a mixed-effects model (i.e. to find moderators in your dataset that help explain this heterogeneity); otherwise the results are unreliable.
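For reference, here is a minimal sketch of fitting a random-effects model and drawing the forest plot with metafor, assuming per-study estimates and standard errors; the data values and the column names (`cohort`, `estimate`, `se`) are hypothetical:

```r
library(metafor)

# Hypothetical per-cohort effect estimates (betas) and standard errors
dat <- data.frame(
  cohort   = c("Cohort 1", "Cohort 2", "Cohort 3", "Cohort 4"),
  estimate = c(0.25, 0.10, 0.32, 0.18),
  se       = c(0.08, 0.12, 0.10, 0.09)
)

# Random-effects model; the printout reports the Q-test for heterogeneity,
# its p-value, tau^2 and I^2, as well as the pooled estimate and its CI.
res <- rma(yi = estimate, sei = se, slab = cohort, data = dat, method = "REML")
summary(res)

# Forest plot: square sizes reflect study weights; the diamond is the pooled
# estimate, and a diamond to the right of the no-effect line means a positive
# overall effect.
forest(res)
```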

NRLP
  • Thank you Luminita for your answer and all the explanations. To make the forest plot, these are the variables I used: the cohort names, the estimate (which is the beta) and the standard error. I then fitted a fixed-effects model using the R package "metafor" (with `method="FE"`). Should I have used REML or ML instead? And if I understood correctly, it is good that the heterogeneity test is non-significant; otherwise I would have to use a mixed-effects model. Is that right? – Svalf Sep 21 '16 at 11:02
  • We typically use the fixed effects model when the observations/studies in the sample are obtained through identical methods and we have strong reasons to assume that the true effect size is _exactly_ the same in all studies. Otherwise we should rather use random effects to account for the heterogeneity between these observations/studies. I am not sure what the setup of your investigation is, but here is a resource that might help you make the best choice for your situation (see also the code sketch after these comments): [FEvsRE](https://www.meta-analysis.com/downloads/Meta-analysis%20fixed%20effect%20vs%20random%20effects.pdf) – NRLP Sep 21 '16 at 14:27
  • Regarding the heterogeneity test, yes, it is good that it is not significant. Alternatively, you could also look at the heterogeneity indicator I2 (provided in the results of your model), which shows the impact of the between-study heterogeneity on the meta-analysis. Studies contribute differently to the overall heterogeneity. However, our concern is not the between-study heterogeneity per se, but how it affects the results of the meta-analysis. – NRLP Sep 21 '16 at 14:31
  • Here is a condensed review of the I2 indicator from the Cochrane Collaboration (it explains the importance of looking at I2, how it is computed, and how to interpret its thresholds): [I2](http://handbook.cochrane.org/chapter_9/9_5_2_identifying_and_measuring_heterogeneity.htm) – NRLP Sep 21 '16 at 14:31
  • When heterogeneity is affecting the results, we need to find ways to reduce it. Ideally, we manage to reduce/explain it by adding moderators/variables to the model. If that is not possible (i.e. we cannot find relevant moderators), a quick fix is to visually inspect heterogeneity plots to see which studies generate most of the heterogeneity (in the "metafor" package you can visualize this by drawing a Baujat plot after fitting your model, as `baujat(name_of_your_model)`). Then you remove the culprit study(ies) one by one from the data and refit the model. – NRLP Sep 21 '16 at 14:32
  • The last option (i.e. removing studies) is less desirable than using moderators because we also need to explain why those observations/studies were removed: Were they mistakenly included in the data? If they followed the same research methodology, why are they so different? Basically, are we right to remove them because they are indeed different, or did we just 'manipulate' the data for our convenience? – NRLP Sep 21 '16 at 14:32
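To make the workflow from these comments concrete, here is a hedged sketch that reuses the hypothetical `dat` (cohort, estimate, se) from the earlier example: it contrasts the fixed-effect fit (`method="FE"`, as in the question) with a random-effects fit, reads I2 from the output, and runs the heterogeneity diagnostics mentioned above:

```r
library(metafor)

# Fixed-effect model, as originally fitted in the question
res_fe <- rma(yi = estimate, sei = se, data = dat, method = "FE")

# Random-effects model (REML is the metafor default)
res_re <- rma(yi = estimate, sei = se, data = dat, method = "REML")

# Compare the pooled estimates; the random-effects printout reports the
# Q-test, tau^2 and I2 discussed in the comments.
summary(res_fe)
summary(res_re)

# Baujat plot: shows which studies contribute most to heterogeneity and to
# the overall result (the package is "metafor", not "metaphor")
baujat(res_re)

# Leave-one-out diagnostics: refits the model dropping one study at a time,
# a gentler alternative to removing studies by hand
leave1out(res_re)
```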