
I am trying to run Fisher's exact test on all pairwise combinations of the groups in an n x 2 dataframe, and from what I have read, a pairwise Fisher's test seems to be what I want (see here). However, it produced p-values that didn't look right, so I decided to check some combinations manually and got different results. I've included what I hope is a reproducible example of what I've tried. Perhaps I'm doing something wrong in the R code, as I'm still relatively inexperienced, or I may be completely misunderstanding what the pairwise tests are meant to compute - if so, sorry, and I can remove the question if it's not appropriate for SO.

# Packages -----------------------------------------------------------

library("tidyverse")
library("janitor")
library("RVAideMemoire")
library("fmsb")

# Generate Data -----------------------------------------------------------

set.seed(1)
test <-
  tibble(
    "drug" = sample(
      c("Control", "Treatment1", "Treatment2"), 
      size = 300,
      prob = c(0.1, 0.4, 0.3),
      replace = TRUE),
    "country" = sample(
      c("Canada", "United States"),
      size = 300,
      prob = c(0.4, 0.6),
      replace = TRUE
    ),
    "selected" = sample(
      c(0, 1), 
      size = 300, 
      prob = c(0.1, 0.65), 
      replace = TRUE)
  )

test2 <- test %>%
  filter(selected == 1)

# Cross-tabulate drug by country and convert to a matrix
# with the drug groups as row names
test2_tab <- test2 %>%
  tabyl(drug, country) %>%
  remove_rownames() %>%
  column_to_rownames(var = colnames(.[1])) %>%
  as.matrix()

When I run the following pairwise tests I get the output below (I used two packages just to make sure the problem wasn't simply that I had implemented one of them incorrectly).

# Pairwise ----------------------------------------------------------------

RVAideMemoire::fisher.multcomp(test2_tab, p.method = "bonferroni")
fmsb::pairwise.fisher.test(test2_tab, p.adjust.method = "bonferroni")
        Pairwise comparisons using Fisher's exact test for count data

data:  test2_tab

           Control Treatment1
Treatment1       1          -
Treatment2       1          1

P value adjustment method: bonferroni



    Pairwise comparisons using Pairwise comparison of proportions (Fisher) 

data:  test2_tab 

           Control Treatment1
Treatment1 1       -         
Treatment2 1       1         

P value adjustment method: bonferroni 

However, when I build the individual 2x2 tables and run separate Fisher's tests on them, as below, I get different results.

# Individual --------------------------------------------------------------

drug.groups2 <- unique(test2$drug)

# Just to check the correct 2x2 tables are produced
# combn(drug.groups2, 2, function(x) {
#   id <- test2$drug %in% x
#   cross_tabs <- table(test2$drug[id], test2$country[id])
# }, simplify = FALSE)


# Run Fisher's exact test on each pairwise 2x2 table, relabelling
# data.name so the output shows which two groups were compared
combn(drug.groups2, 2, function(x) {
  id <- test2$drug %in% x
  cross_tabs <- table(test2$drug[id], test2$country[id])
  fishers <- fisher.test(cross_tabs)
  fishers$data.name <- paste(unique(as.character(test2$drug[id])), collapse = "-")
  return(fishers)
}, simplify = FALSE)

[[1]]

    Fisher's Exact Test for Count Data

data:  Treatment1-Treatment2
p-value = 0.3357
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.7566901 2.4175206
sample estimates:
odds ratio 
  1.347105 


[[2]]

    Fisher's Exact Test for Count Data

data:  Treatment1-Control
p-value = 0.4109
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.2560196 1.6292583
sample estimates:
odds ratio 
 0.6637235 


[[3]]

    Fisher's Exact Test for Count Data

data:  Treatment2-Control
p-value = 1
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.3294278 2.3146386
sample estimates:
odds ratio 
 0.8940101 

arnold-c

  • Isn't it due to the Bonferroni correction, which is applied to the pairwise comparisons but not to the individual tests? – Łukasz Deryło May 03 '20 at 19:47
  • Yeah, you called it explicitly: fisher.multcomp(test2_tab, p.method = "bonferroni"). If you do a rough calculation, you have 3 comparisons and your minimum p-value is 0.3357, which makes 3 * 0.3357 = 1 – StupidWolf May 03 '20 at 20:06
  • Yep - that was a silly oversight. Thanks for the help – arnold-c May 03 '20 at 20:27

2 Answers


Isn't it due to the Bonferroni correction, which is applied to the pairwise comparisons but not to the individual tests?
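
A minimal check of this, using the unadjusted p-values from the individual tests in the question: p.adjust() with method = "bonferroni" multiplies each p-value by the number of comparisons (here 3) and caps the result at 1, which reproduces the table of 1s returned by the pairwise functions.

# Unadjusted p-values reported by the three individual Fisher's tests
p_unadj <- c(
  "Treatment1-Treatment2" = 0.3357,
  "Treatment1-Control"    = 0.4109,
  "Treatment2-Control"    = 1
)

# Bonferroni adjustment: multiply by the number of comparisons and cap at 1,
# so even the smallest p-value (3 * 0.3357) is reported as 1
p.adjust(p_unadj, method = "bonferroni")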

Łukasz Deryło

As clearly pointed out in the comments by Łukasz and StupidWolf, I had forgotten that I had applied the p.method = "bonferroni" correction; the results match the individual tests when the function is called with p.method = "none" ...
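
For completeness, a sketch of the unadjusted calls (note that fmsb takes p.adjust.method rather than p.method); without the correction the pairwise p-values line up with the individual fisher.test results:

# Re-run the pairwise comparisons with no p-value adjustment
RVAideMemoire::fisher.multcomp(test2_tab, p.method = "none")
fmsb::pairwise.fisher.test(test2_tab, p.adjust.method = "none")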

arnold-c