I am trying to obtain the standard deviation of the output below using numpy.std():
[[array([0.92473118, 0.94117647]), array([0.98850575, 0.69565217]), array([0.95555556, 0.8 ]), 0.923030303030303], [array([0.85555556, 0.8 ]), array([0.95061728, 0.55172414]), array([0.9005848 , 0.65306122]), 0.8353285811932428]]
To obtain that output I used the code below. It runs inside a loop; in this example the loop went through two iterations, appending one entry per iteration.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
# inside the loop (avg_fscore comes from earlier in my code)
precision, recall, fscore, support = precision_recall_fscore_support(np.argmax(y_test_0, axis=-1), np.argmax(probas_, axis=-1))
eval_test_metric = [precision, recall, fscore, avg_fscore]
test_metric1.append(eval_test_metric)
# after the loop, across both iterations
std_matrix1 = np.std(test_metric1, axis=0)
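For reference, here is a minimal self-contained sketch of that loop using made-up dummy data (the random y_test_0/probas_ and the support-weighted avg_fscore are placeholders, not my real pipeline); it builds a test_metric1 list with the same shape as the one shown at the top, on which I then call np.std as above.

import numpy as np
from sklearn.metrics import precision_recall_fscore_support

test_metric1 = []
for _ in range(2):  # two iterations, as in the example output above
    # dummy one-hot ground truth and dummy predicted probabilities
    y_test_0 = np.eye(2)[np.random.randint(0, 2, size=50)]
    probas_ = np.random.rand(50, 2)
    precision, recall, fscore, support = precision_recall_fscore_support(
        np.argmax(y_test_0, axis=-1), np.argmax(probas_, axis=-1), labels=[0, 1])
    # placeholder: in my real code avg_fscore comes from elsewhere;
    # here I approximate it with a support-weighted mean of fscore
    avg_fscore = np.average(fscore, weights=support)
    test_metric1.append([precision, recall, fscore, avg_fscore])

print(test_metric1)  # a 2-element list of [precision, recall, fscore, avg_fscore] entries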
I would like to get an output similar in structure to what I get when I use np.mean(). (Please excuse the 'precision'/'recall' key names; I just made them up in my code for clarity.)
dr_test_metric = dict(zip(['specificity avg', 'sensitivity avg', 'ppv avg', 'npv avg'], np.mean(test_metric2, axis=0)))
print(dr_test_metric,'\n')
Output (where, in 'precision avg': array([0.89014337, 0.87058824]), 0.89014337 is the average precision for class 0 of my model and 0.87058824 is the average precision for class 1):
{'precision avg': array([0.89014337, 0.87058824]), 'recall avg': array([0.96956152, 0.62368816]), 'fscore avg': array([0.92807018, 0.72653061]), 'avg_fscore avg': 0.8791794421117729}
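To make the structure I am after concrete, the sketch below builds per-metric standard deviations by stacking each column of test_metric1 by hand (it assumes the test_metric1 list from the loop above, and the variable and key names here are just illustrative). I am hoping np.std(test_metric1, axis=0) can give me this kind of result directly, the same way np.mean() does.

import numpy as np

# Stack each "column" of test_metric1 separately, then take the std per metric.
precisions  = np.vstack([m[0] for m in test_metric1])  # shape (n_iterations, n_classes)
recalls     = np.vstack([m[1] for m in test_metric1])
fscores     = np.vstack([m[2] for m in test_metric1])
avg_fscores = np.array([m[3] for m in test_metric1])    # shape (n_iterations,)

dr_test_std = dict(zip(
    ['precision std', 'recall std', 'fscore std', 'avg_fscore std'],
    [precisions.std(axis=0), recalls.std(axis=0), fscores.std(axis=0), avg_fscores.std()],
))
print(dr_test_std)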