I just want to clarify: what is the difference between
roc_auc_score(y_test,results.predict(X_test))
and
roc_auc_score(y_test,results.predict_proba(X_test)[:,1])
I know the latter returns the probability of class 1 (the positive class) for each test observation, and that predict_proba() should also be used when plotting the roc_curve. But which is the right way to check a binary classification model's performance with ROC AUC? I currently use the former. What does the latter one mean?
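
To make the comparison concrete, here is a minimal, self-contained sketch of what I mean (the LogisticRegression model and the make_classification data are just placeholders standing in for my actual setup):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data, only for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

results = LogisticRegression().fit(X_train, y_train)

# predict() returns hard 0/1 labels (probabilities thresholded at 0.5).
auc_from_labels = roc_auc_score(y_test, results.predict(X_test))

# predict_proba()[:, 1] returns the continuous probability of class 1.
auc_from_probs = roc_auc_score(y_test, results.predict_proba(X_test)[:, 1])

print(auc_from_labels, auc_from_probs)

When I run something like this, the two numbers differ, with the label-based one usually lower, which is what prompted the question.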