What is the most straightforward way to perform a two-sample t-test in Python and to include a confidence interval for the difference in means? I've seen various posts, but each one does it differently, and when I tried to calculate the CI myself the result came out slightly wrong. Here is what I have:
import numpy as np
from scipy import stats

g1 = np.array([48.7107107, 36.8587287, 67.7129929, 39.5538852, 35.8622661])
g2 = np.array([62.4993857, 49.7434833, 67.7516511, 54.3585559, 71.0933957])

m1, m2 = np.mean(g1), np.mean(g2)

# degrees of freedom: (n1 - 1) + (n2 - 1)
dof = (len(g1) - 1) + (len(g2) - 1)

# pooled variance (groups are the same size, so a simple average)
MSE = (np.var(g1) + np.var(g2)) / 2

# standard error of the difference in means (assumes equal group sizes)
stderr_diffs = np.sqrt((2 * MSE) / len(g1))

# two-tailed critical t at alpha = .05; passing a list makes ppf
# return a one-element array, hence the brackets in the output below
tcl = stats.t.ppf([.975], dof)

lower_limit = (m1 - m2) - tcl * stderr_diffs
upper_limit = (m1 - m2) + tcl * stderr_diffs
print(lower_limit, upper_limit)
returns:
[-30.12845447] [-0.57070077]
However, when I run the same test in SPSS, although I get the same t and p values, the CI is (-31.87286, 1.17371), and R gives the same interval as SPSS. I can't seem to find the correct way to reproduce this in Python and would appreciate some help.
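One thing I noticed while digging: np.var defaults to the population variance (ddof=0), whereas SPSS and R pool the sample variances (ddof=1). If I recompute with ddof=1, I do get the SPSS/R interval, but I'm not sure whether that is the statistically correct fix or just a coincidence for my data:

# continuing from the snippet above (g1, g2, m1, m2, dof already defined);
# same calculation, but pooling the sample variances (ddof=1),
# which is what SPSS and R use
MSE = (np.var(g1, ddof=1) + np.var(g2, ddof=1)) / 2
stderr_diffs = np.sqrt((2 * MSE) / len(g1))
tcl = stats.t.ppf([.975], dof)
print((m1 - m2) - tcl * stderr_diffs,
      (m1 - m2) + tcl * stderr_diffs)
# gives approximately [-31.87286] [1.17371], matching SPSS and R

(I've also read that recent SciPy versions return a result object from stats.ttest_ind with a confidence_interval() method, but I'm not certain which version introduced it, so I'd still like to understand the manual calculation.)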