I recently computed Fisher's exact test for a 2x2 contingency table using SciPy's built-in fisher_exact() function. I'm using the example code from the SciPy docs:
>>> from scipy.stats import fisher_exact
>>> import numpy as np
>>> table = np.array([[6, 2], [1, 4]])
>>> res = fisher_exact(table, alternative='two-sided')
>>> res[1]
0.10256410256410257
Then I compute the p-value by hand using the formula for Fisher's exact test: the product of the factorials of the row and column totals, divided by the product of the factorials of the four cell counts and the factorial of the grand total. Here is a website with that formula: https://www.statology.org/fishers-exact-test/ The formula is written out below, followed by my code implementing it:
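Writing the four cell counts as a, b, c, d (reading the table row by row) and the grand total as n = a + b + c + d, the formula is

$$
p = \frac{(a+b)!\,(c+d)!\,(a+c)!\,(b+d)!}{a!\,b!\,c!\,d!\,n!}
$$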
>>> import numpy as np
>>> from scipy.stats import fisher_exact
>>> from scipy.special import factorial
>>> table = np.array([[6, 2], [1, 4]])
>>> res = fisher_exact(table)
>>> a, b, c, d = table.ravel()  # cell counts read row by row: a=6, b=2, c=1, d=4
>>> fishers_plug_in = (factorial(a + b) * factorial(a + c) * factorial(c + d) * factorial(b + d)
...                    / (factorial(a) * factorial(b) * factorial(c) * factorial(d)
...                       * factorial(a + b + c + d)))
>>> print(fishers_plug_in)
0.08158508158508158
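To rule out a bug in my own plug-in code, I also checked that value against scipy.stats.hypergeom, which (as far as I understand) gives this same point probability when parameterized by the table margins, and it agrees:
>>> from scipy.stats import hypergeom
>>> # grand total 13, first row total 8, first column total 7, first cell 6
>>> round(hypergeom.pmf(6, 13, 8, 7), 6)
0.081585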
Does anyone have any idea why the calculated p-values are different? My best guess is that SciPy uses some sort of approximation for the factorials inside fisher_exact(), perhaps for larger contingency tables, but I can't find any documentation about this.