I am trying to figure out how to calculate false positives and false negatives using numpy.
I am able to calculate accuracy and inaccuracy with the following:
In the following examples, y_prediction is a 2d array of the predictions (1s and 0s) made on the dataset, and truth_labels is a 1d array of the class labels associated with each row of the 2d feature array.
accurate_prediction_rate = np.count_nonzero(y_prediction == truth_labels)/truth_labels.shape[0]
inaccurate_prediction_rate = np.count_nonzero(y_prediction != truth_labels)/truth_labels.shape[0]
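For reference, here is a toy version of the setup (the values and shapes are just illustrative, and I flatten y_prediction here as an assumption so it lines up element-wise with truth_labels):

import numpy as np

# Toy data: 5 samples; predictions come out of the model as a column vector (2d),
# the class labels are a flat 1d array
y_prediction = np.array([[1], [0], [1], [1], [0]])
truth_labels = np.array([1, 0, 0, 1, 1])

# Flatten the predictions so the comparison is element-wise rather than broadcast
y_pred_flat = y_prediction.ravel()

accurate_prediction_rate = np.count_nonzero(y_pred_flat == truth_labels) / truth_labels.shape[0]
inaccurate_prediction_rate = np.count_nonzero(y_pred_flat != truth_labels) / truth_labels.shape[0]

print(accurate_prediction_rate)    # 0.6
print(inaccurate_prediction_rate)  # 0.4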
I then tried to calculate false positives (positives in my dataset are indicated by a 1) like so...
false_positives = np.count_nonzero((y_prediction != truth_labels)/truth_labels.shape[0] & y_prediction == 1)
but that raises a TypeError. I am new to numpy, so I am unfamiliar with all the available methods. Is there a numpy method better suited to what I am trying to do?
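To make the intent concrete, this is the count I am after, written out as a plain Python loop over the toy arrays above (just a sketch of the logic, not the vectorized numpy version I'm looking for):

false_positives = 0
false_negatives = 0
for pred, truth in zip(y_prediction.ravel(), truth_labels):
    if pred == 1 and truth == 0:    # predicted positive, label says negative
        false_positives += 1
    elif pred == 0 and truth == 1:  # predicted negative, label says positive
        false_negatives += 1

print(false_positives, false_negatives)  # 1 1 for the toy data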