
I have heard people say that you can adjust the threshold to tweak the trade-off between precision and recall, but I can't find an actual example of how to do that.

My code (an excerpt; names such as `mass`, `df_temp`, `dv`, and `classifierUsed2` are defined earlier in the script):

for i in mass[k]:
    df = df_temp.copy() # reset df before each loop (copy so df_temp is not mutated below)
    if 1==1: # debug switch: run for every ethnicity instead of a single one
    ###if i == singleEthnic:
        count+=1
        ethnicity_tar = str(i) # fr, en, ir, sc, others, ab, rus, ch, it, jp
        # fn, metis, inuit; algonquian, iroquoian, athapaskan, wakashan, siouan, salish, tsimshian, kootenay
        ############################################
        ############################################

        def ethnicity_target(row):
            try:
                if row[ethnicity_var] == ethnicity_tar:
                    return 1
                else:
                    return 0
            except: return None
        df['ethnicity_scan'] = df.apply(ethnicity_target, axis=1)
        print '1=', ethnicity_tar
        print '0=', 'non-'+ethnicity_tar

        # Random sampling a smaller dataframe for debugging
        df = df.sample(n=subsample_size, random_state=seed) # seed gives fixed randomness; sample() already returns a DataFrame
        print 'Class count:'
        print df['ethnicity_scan'].value_counts()

        # Assign X and y variables
        X = df.raw_name.values
        X2 = df.name.values
        X3 = df.gender.values
        X4 = df.location.values
        y = df.ethnicity_scan.values

        # Feature extraction functions
        def feature_full_name(nameString):
            try:
                full_name = nameString
                if len(full_name) > 1: # reject names of only 1 character
                    return full_name
                else: return '?'
            except: return '?'

        def feature_full_last_name(nameString):
            try:
                last_name = nameString.rsplit(None, 1)[-1]
                if len(last_name) > 1: # reject names of only 1 character
                    return last_name
                else: return '?'
            except: return '?'

        def feature_full_first_name(nameString):
            try:
                first_name = nameString.rsplit(' ', 1)[0]
                if len(first_name) > 1: # reject names of only 1 character
                    return first_name
                else: return '?'
            except: return '?'

        # Build per-sample feature dicts ('name' avoids shadowing the outer loop's i);
        # dv.fit_transform below spits out a numpy array for all features
        my_dict = [{'last-name': feature_full_last_name(name)} for name in X]
        my_dict5 = [{'first-name': feature_full_first_name(name)} for name in X]

        all_dict = []
        for j in range(len(my_dict)):
            temp_dict = dict(
                my_dict[j].items() + my_dict5[j].items() # Python 2 dict merge
                )
            all_dict.append(temp_dict)

        newX = dv.fit_transform(all_dict)

        # Separate the training and testing data sets
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)

        # Fitting X and y into model, using training data
        classifierUsed2.fit(X_train, y_train)

        # Making predictions using trained data
        y_train_predictions = classifierUsed2.predict(X_train)
        y_test_predictions = classifierUsed2.predict(X_test)

I tried replacing the line `y_test_predictions = classifierUsed2.predict(X_test)` with `y_test_predictions = classifierUsed2.predict(X_test) > 0.8` and `y_test_predictions = classifierUsed2.predict(X_test) > 0.01`, but nothing changes drastically.
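A minimal sketch of what those comparisons effectively compute (assuming `predict` returns hard 0/1 class labels, any threshold between 0 and 1 produces the same boolean array):

import numpy as np

labels = np.array([0, 1, 1, 0]) # stand-in for the output of predict()
print labels > 0.8   # [False  True  True False] -- same as labels == 1
print labels > 0.01  # [False  True  True False] -- identical, hence no change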

  • To your question: What classifier do you use? Instead of `predict`, does the classifier have `predict_proba`? `predict` usually only outputs 1s and 0s, while `predict_proba` outputs a float which you can threshold. – Robin Spiess Feb 29 '16 at 11:51
  • I used logistic regression and SVM. – KubiK888 Feb 29 '16 at 14:53

1 Answer


`classifierUsed2.predict(X_test)` only outputs the predicted class (most likely 0s and 1s) for each sample. What you want is `classifierUsed2.predict_proba(X_test)`, which outputs a 2-D array with probabilities for each class per sample. To do the thresholding you can do something like:

y_test_probabilities = classifierUsed2.predict_proba(X_test)
# y_test_probabilities has shape = [n_samples, n_classes]

y_test_predictions_high_precision = y_test_probabilities[:,1] > 0.8
y_test_predictions_high_recall = y_test_probabilities[:,1] > 0.1

`y_test_predictions_high_precision` will only flag samples the model is fairly certain belong to class 1, while `y_test_predictions_high_recall` will predict class 1 more often (and achieve a higher recall) but will also contain many more false positives.
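To see the trade-off numerically, you can score both thresholded predictions against y_test (a minimal sketch using scikit-learn's `precision_score` and `recall_score`; the variable names are the ones from the snippet above):

from sklearn.metrics import precision_score, recall_score

for label, preds in [('threshold 0.8', y_test_predictions_high_precision),
                     ('threshold 0.1', y_test_predictions_high_recall)]:
    # the boolean arrays are treated as 0/1 predictions by the metrics
    print label, 'precision=%.3f recall=%.3f' % (
        precision_score(y_test, preds), recall_score(y_test, preds))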

`predict_proba` is supported by both classifiers you use, logistic regression and SVM.
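One caveat for the SVM (assuming scikit-learn's `SVC`): probability estimates are disabled by default, so `predict_proba` is only available if the model was constructed with `probability=True`:

from sklearn.svm import SVC

# probability=True enables probability estimates via Platt scaling;
# fitting is slower because it runs an internal cross-validation
classifierUsed2 = SVC(probability=True)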
