
I am using LinearSVC from scikit-learn on a 3-class dataset. With the one-vs-rest strategy (the default), I get exactly 3 hyperplane weight vectors, each of size number_of_features_in_dataset. The final prediction is based on a combination of all 3 hyperplanes' coefficients, but what I want is to exclude, say, the 2nd hyperplane from making any contribution to the final decision.

I searched and found that internally the multiple hyperplanes vote to make the final classification, and in case of a tie, the distance to the individual hyperplanes is considered.

from sklearn.svm import LinearSVC

clf = LinearSVC()
clf.fit(x_train, y_train)
y_predict = clf.predict(x_test)
print(clf.coef_)  # prints a 3 x number_of_features array; each row holds one hyperplane's weights
# I want to exclude, say, the 2nd hyperplane from affecting the decision made by the predict call above
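
For what it's worth, with the default one-vs-rest strategy the prediction seems to reduce to an argmax over decision_function, so each hyperplane contributes through its own column of scores (a sketch, reusing clf and x_test from above):

import numpy as np

scores = clf.decision_function(x_test)  # shape (n_samples, 3), one column per hyperplane
manual_predict = clf.classes_[np.argmax(scores, axis=1)]
# manual_predict should match y_predict under the default ovr strategy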
Newbee
  • You could predict using the hyperplane distances and then manually override one of the hyperplanes (see the sketch after these comments). – Benjamin Breton May 15 '19 at 08:37
  • @BenjaminBreton I am not sure how they aggregate the distances internally. I guess they use Platt scaling or something similar, but I don't know exactly what. – Newbee May 15 '19 at 08:39
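
A minimal sketch of the override idea from the comments (hypothetical: it reuses the fitted clf from the question and masks the 2nd hyperplane's scores so its class can never win the argmax):

import numpy as np

scores = clf.decision_function(x_test)
scores[:, 1] = -np.inf  # mask the 2nd hyperplane: its class can never be selected
y_predict_excluded = clf.classes_[np.argmax(scores, axis=1)]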

1 Answer


You can manually add a bias to each hyperplane to favor one class over another:

from sklearn.svm import LinearSVC
from sklearn.preprocessing import LabelEncoder
import numpy as np

import warnings
warnings.filterwarnings(module='sklearn*', action='ignore', category=DeprecationWarning)


class BiasedSVC(LinearSVC):

    def __init__(self, penalty='l2', loss='squared_hinge', dual=True, tol=1e-4,
                 C=1.0, multi_class='ovr', fit_intercept=True,
                 intercept_scaling=1, class_weight=None, verbose=0,
                 random_state=None, max_iter=1000, classes=None, biases=None
                 ):
        """
        Same as LinearSVC: (https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/classes.py)
        But allows biasing the hyperplanes to favor one class over another. Works for multiclass classification.
        :param classes: list of the classes (all the classes must be present in y during training).
        :type classes: list of strings
        :param biases: list of biases in the alphabetical order of the classes (ex: [0.0, +0.1, -0.1]) or dict
        containing the weights by class (ex: {"class_1": 0.0, "class_2": +0.1, "class_3": -0.1})
        :type biases: list of floats or dict
        """

        super().__init__(penalty=penalty, loss=loss, dual=dual, tol=tol, C=C,
                         multi_class=multi_class, fit_intercept=fit_intercept,
                         intercept_scaling=intercept_scaling, class_weight=class_weight,
                         verbose=verbose, random_state=random_state, max_iter=max_iter)

        # Define new variables
        self.classes = classes
        self.biases = biases

        # Convert the biases to a list of floats
        self._biases = self.get_biases(self.biases)

        # Create Norm variable
        self._w_norm = None

        # Create LabelEncoder
        self._le = LabelEncoder()

    def get_biases(self, biases):
        """ Transtype the biases to get a list of floats """
        if isinstance(biases, list):
            return biases
        elif isinstance(biases, dict):
            return [biases[class_name] for class_name in self.classes]
        else:
            return [0.0 for _ in self.classes]

    def get_w_norm(self):
        """ Get the norm of each hyperplane's weight vector to normalize the distances """
        self._w_norm = np.linalg.norm(self.coef_, axis=1)

    def fit(self, X, y, sample_weight=None):

        # Fit the label Encoder (to change labels to indices)
        self._le.fit(y)

        # Fit the SVM using the mother class (LinearSVC) fit method
        super().fit(X, y, sample_weight)

        # Record the norm of each hyperplane (used at prediction time)
        self.get_w_norm()

        return self

    def predict(self, X):
        """ Performa a prediction with the biased hyerplane """

        # Get the decision output (distance to the hyperplanes separating the different classes)
        decision_y = self.decision_function(X)

        # Add the bias to each hyperplane (normalized for each class)
        dist = decision_y / self._w_norm + self._biases

        # Return the corresponding class
        return self._le.inverse_transform(np.argmax(dist, axis=1))

Note: you don't apply the biases during training, only during predict: if the biases were used while fitting, the SVC would simply translate the hyperplanes to compensate for them.
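
A hypothetical usage sketch (synthetic data; the class names and bias values are made up for illustration). A large negative bias effectively excludes that class's hyperplane from the decision, which is what the question asks for:

import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           n_classes=3, random_state=0)
y = np.array(['class_1', 'class_2', 'class_3'])[y]  # string labels, alphabetical order

clf = BiasedSVC(classes=['class_1', 'class_2', 'class_3'],
                biases={'class_1': 0.0, 'class_2': -1e6, 'class_3': 0.0})
clf.fit(X, y)

print(np.unique(clf.predict(X)))  # 'class_2' should never appear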

Benjamin Breton