
I am sampling 159 videos and running experiments to see how much the sample rate affects an evaluation score.

I got a pretty messy picture: [plot: evaluation score vs. number of skipped frames, one line per video]

Each line is a single video. The Y axis is the evaluation score; the X axis is the number of frames I skip (in other words, the sample rate).

So the picture tells me the score is pretty stable when the sample rate is 1, 3, 5, 8, or 10. But I want to attach a confidence interval to that, e.g. "97.5% confident the score is stable for these sample rates" (the 97.5% here is a made-up number). I know how to get the mean/variance for a single video, but how do I get a real confidence value? Can someone point me to a resource or show me how to do it? I'm using Python.
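One common approach is to treat the 159 videos as a sample and compute, for each sample rate, a confidence interval for the mean score using the t distribution. A minimal sketch with made-up scores (the `scores` array and its shape are assumptions; substitute your real per-video data):

```python
import numpy as np
from scipy import stats

# Hypothetical data: rows = videos, columns = sample rates (frames skipped).
rng = np.random.default_rng(0)
scores = rng.normal(loc=0.8, scale=0.05, size=(159, 5))  # 159 videos, 5 sample rates
sample_rates = [1, 3, 5, 8, 10]

for j, rate in enumerate(sample_rates):
    col = scores[:, j]
    mean = col.mean()
    se = stats.sem(col)  # standard error of the mean = sd / sqrt(n)
    # 95% CI for the mean score at this sample rate (t distribution, n-1 dof)
    lo, hi = stats.t.interval(0.95, df=len(col) - 1, loc=mean, scale=se)
    print(f"skip={rate}: mean={mean:.4f}, 95% CI=({lo:.4f}, {hi:.4f})")
```

If the scores are far from normal, a bootstrap over the videos (resampling rows and recomputing the mean) is a distribution-free alternative.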

Thank you!

hpnhxxwn
1 Answer


If you assume a normal distribution, then mean ± 1.96 * standard deviation gives a 95% confidence interval.

user2974951
    That's incorrect. Assuming an underlying normal distribution, the 95% CI is given by the `mean ± 1.96 * se`, where the standard error (of the mean) is `se = sd / sqrt(n)`. Note the critical difference between "standard error" and "standard deviation"! – Maurits Evers Sep 13 '18 at 12:10
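The corrected formula from the comment is a few lines in Python. A minimal sketch with hypothetical per-video scores (the 1.96 multiplier is the normal-approximation critical value for 95%):

```python
import numpy as np

scores = np.array([0.81, 0.79, 0.83, 0.80, 0.78, 0.82])  # hypothetical per-video scores
mean = scores.mean()
sd = scores.std(ddof=1)          # sample standard deviation
se = sd / np.sqrt(len(scores))   # standard error of the mean = sd / sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)  # 95% CI for the mean
```

Note the interval shrinks as n grows (because of the sqrt(n) in the standard error), whereas mean ± 1.96 * sd would not.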