
I have a list of 40 (mean, variance) pairs describing model error rates, and I want to determine which models are statistically better (i.e. have smaller error rates) than others.

Assuming the error rates are normally distributed, I am presently looking up z-scores manually, which works but is taking a long time. Is there a more Pythonic way to create a matrix of probability scores comparing, for instance, model A vs model B, model B vs model C, model A vs model C?

I haven't included any code so far as I am currently working in Excel, but I have included a dummy sample below in case anyone can assist.

data = [[10, 0.8], [5, 1.2], [12, 2.4], [6, 2.8]]  # one [mean, variance] pair per model
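
For reference, a minimal sketch of the kind of pairwise matrix in question, assuming each entry of `data` is `[mean, variance]`, the models' error rates are independent normals, and numpy/scipy are available (the variable names are illustrative):

import numpy as np
from scipy.stats import norm

data = np.array([[10, 0.8], [5, 1.2], [12, 2.4], [6, 2.8]])
means, variances = data[:, 0], data[:, 1]

# z[i, j] = (mean_j - mean_i) / sqrt(var_i + var_j): z-score of the
# difference between two independent normal error rates
z = (means[None, :] - means[:, None]) / np.sqrt(variances[:, None] + variances[None, :])

# prob[i, j] = P(error of model i < error of model j)
prob = norm.cdf(z)
print(np.round(prob, 3))

A `prob[i, j]` close to 1 would suggest model i's error rate is very likely smaller than model j's; the diagonal is 0.5 by construction.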
  • If you're looking up z-scores manually, you have some 'X' that you are using to compare models and see how many standard deviations away the value is, right? – Karan Shishoo Oct 22 '18 at 10:29
  • Also take a look at [this](https://stackoverflow.com/questions/20864847/probability-to-z-score-and-vice-versa-in-python) question, which deals with converting z-scores to probabilities using the scipy package – Karan Shishoo Oct 22 '18 at 10:36
  • Thanks @casualcoder - My question isn't fantastically worded but you've pointed me in the right direction. – cookie1986 Oct 22 '18 at 10:41
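
Following the scipy pointer in the comments above, a minimal sketch of the z-score-to-probability conversion (and back) with scipy.stats.norm:

from scipy.stats import norm

z = 1.96
p = norm.cdf(z)       # standard-normal probability below z (about 0.975)
z_back = norm.ppf(p)  # inverse CDF: probability back to a z-score (about 1.96)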

0 Answers