In scikit-learn, I was doing multi-class classification on labelled text data using MultinomialNB. For prediction I used the predict_proba method of MultinomialNB:

from sklearn.naive_bayes import MultinomialNB

clf = MultinomialNB()
clf.fit(X_train, Y_train)
print(clf.predict_proba(X_test[0]))

As a result I got a vector of probability values, one per class, that add up to 1. I understand this happens because the output is normalized across all classes (softmax-like behaviour).

array([[0.01245064, 0.02346781, 0.84694063, 0.03238112, 0.01833107, 0.03103464, 0.03539408]])

My question is: at prediction time I would like binary-cross-entropy-style output, i.e. a probability between 0 and 1 for each class, independent of the other classes. How do I change this behaviour when predicting in scikit-learn?

1 Answer

You can get the log likelihood for every class by using the _joint_log_likelihood method:

def _joint_log_likelihood(self, X):
        """Compute the unnormalized posterior log probability of X
        I.e. ``log P(c) + log P(x|c)`` for all rows x of X, as an array-like of
        shape [n_classes, n_samples].
        Input is passed to _joint_log_likelihood as-is by predict,
        predict_proba and predict_log_proba.
        """

Naive Bayes' predict_log_proba works simply by normalizing the output of the function above:

def predict_log_proba(self, X):
        """
        Return log-probability estimates for the test vector X.
        Parameters
        ----------
        X : array-like, shape = [n_samples, n_features]
        Returns
        -------
        C : array-like, shape = [n_samples, n_classes]
            Returns the log-probability of the samples for each class in
            the model. The columns correspond to the classes in sorted
            order, as they appear in the attribute `classes_`.
        """
        jll = self._joint_log_likelihood(X)
        # normalize by P(x) = P(f_1, ..., f_n)
        log_prob_x = logsumexp(jll, axis=1)
        return jll - np.atleast_2d(log_prob_x).T 
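
If you actually need a score between 0 and 1 for each class that does not depend on the other classes, one option (a sketch, not something MultinomialNB does out of the box) is to train one binary MultinomialNB per class, i.e. a manual one-vs-rest scheme, and read off each classifier's positive-class probability. X_train, Y_train and X_test are again assumed from the question.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

y = np.asarray(Y_train)
classes = np.unique(y)
per_class_proba = []

for c in classes:
    # Binary problem: class c vs. the rest
    binary_clf = MultinomialNB()
    binary_clf.fit(X_train, (y == c).astype(int))
    # P("is class c"), independent of the other classifiers
    per_class_proba.append(binary_clf.predict_proba(X_test)[:, 1])

# Shape (n_samples, n_classes); rows need not sum to 1
per_class_proba = np.column_stack(per_class_proba)

Each column is produced by its own classifier, so the rows are not normalized across classes.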