Suppose your feature names are stored in a list called feature_labels. You can print each feature's importance score as follows:
for feature in zip(feature_labels, rf.feature_importances_):
    print(feature)
The scores above are the importance scores for each variable. One thing to remember here is that all the importance scores add up to 1.0 (i.e., 100%).
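You can verify this normalization yourself; below is a minimal, self-contained sketch on a toy dataset (the synthetic data and the feature names stand in for your own X_train, y_train, and feature_labels):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for your own training set
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_labels = ['f0', 'f1', 'f2', 'f3']

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)

for feature in zip(feature_labels, rf.feature_importances_):
    print(feature)

# The importances are normalized, so the total is 1.0
print(round(sum(rf.feature_importances_), 6))
```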
In order to identify and select the most important features:
# Create a selector object that will use the random forest classifier to identify
# features that have an importance of more than 0.15
sfm = SelectFromModel(rf, threshold=0.15)
# Train the selector
sfm.fit(X_train, y_train)
'''SelectFromModel(estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=10000, n_jobs=-1, oob_score=False, random_state=0,
verbose=0, warm_start=False),
prefit=False, threshold=0.15)'''
# Print the names of the most important features
for feature_list_index in sfm.get_support(indices=True):
    print(feature_labels[feature_list_index])
This will print the names of your most important features, based on the threshold you set.
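Once the selector is fitted, you would typically also transform your data so that only the selected features remain. A hedged, self-contained sketch (the synthetic dataset here is a stand-in for your own train/test split):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

# Toy data standing in for your own dataset
X, y = make_classification(n_samples=200, n_features=6, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
sfm = SelectFromModel(rf, threshold=0.15)
sfm.fit(X_train, y_train)

# Keep only the columns whose importance exceeds the threshold
X_important_train = sfm.transform(X_train)
X_important_test = sfm.transform(X_test)
print(X_train.shape[1], '->', X_important_train.shape[1])
```

The reduced matrices can then be used to train a new, smaller model that often performs nearly as well as the original while being faster to fit and evaluate.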