I am benchmarking RandomForestClassifier on CPU with scikit-learn and on GPU with RAPIDS, comparing speed-up and scoring on the Iris dataset (this is a first try; in the future I will switch to a larger dataset for a better benchmark, but I am starting with these two libraries).
The problem is that when I measure the score on CPU I always get 1.0, but when I measure the score on GPU I get a value that varies between 0.2 and 1.0, and I do not understand why this could be happening.
First of all, the library versions I am using are:
NumPy Version: 1.17.5
Pandas Version: 0.25.3
Scikit-Learn Version: 0.22.1
CuPy Version: 6.7.0
cuDF Version: 0.12.0
cuML Version: 0.12.0
Dask Version: 2.10.1
Dask-CUDA Version: 0+unknown
Dask-cuDF Version: 0.12.0
Matplotlib Version: 3.1.3
Seaborn Version: 0.10.0
The code I use for the scikit-learn RandomForestClassifier is:
# Imports (aliased to match the names used below)
import pandas as pd
from sklearn.model_selection import train_test_split as sk_train_test_split
from sklearn.ensemble import RandomForestClassifier as skRandomForestClassifier
from sklearn.metrics import accuracy_score as sk_accuracy_score
# Read data in host memory
host_s_csv = pd.read_csv('./DataSet/iris.csv', header = 0, delimiter = ',') # Get complete CSV
host_s_data = host_s_csv.iloc[:, [0, 1, 2, 3]].astype('float32') # Get data columns
host_s_labels = host_s_csv.iloc[:, 4].astype('category').cat.codes # Get labels column
# Plot data
#sns.pairplot(host_s_csv, hue = 'variety');
# Split train and test data
host_s_data_train, host_s_data_test, host_s_labels_train, host_s_labels_test = sk_train_test_split(host_s_data, host_s_labels, test_size = 0.2, random_state = 0)
# Create RandomForest model
sk_s_random_forest = skRandomForestClassifier(n_estimators = 40,
                                              max_depth = 16,
                                              max_features = 1.0,
                                              random_state = 10,
                                              n_jobs = 1)
# Fit data in RandomForest
sk_s_random_forest.fit(host_s_data_train, host_s_labels_train)
# Predict data
sk_s_random_forest_labels_predicted = sk_s_random_forest.predict(host_s_data_test)
# Check score
print('accuracy_score: ', sk_accuracy_score(host_s_labels_test, sk_s_random_forest_labels_predicted))
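In case anyone wants to run the CPU side without my CSV file, the same pipeline can be reproduced with scikit-learn's bundled copy of Iris (a sketch; it swaps my CSV for `sklearn.datasets.load_iris` but keeps the same split and model parameters):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Same features/labels as the CSV version, but from sklearn's bundled dataset
iris = load_iris()
data = iris.data.astype('float32')
labels = iris.target

# Identical split: 20% test, fixed seed
data_train, data_test, labels_train, labels_test = train_test_split(
    data, labels, test_size=0.2, random_state=0)

# Identical model parameters
model = RandomForestClassifier(n_estimators=40, max_depth=16,
                               max_features=1.0, random_state=10, n_jobs=1)
model.fit(data_train, labels_train)
predicted = model.predict(data_test)
print('accuracy_score:', accuracy_score(labels_test, predicted))
```
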
The code I use for the RAPIDS RandomForestClassifier is:
# Imports (aliased to match the names used below; paths as of cuML 0.12)
import cudf
from cuml.preprocessing.model_selection import train_test_split as cu_train_test_split
from cuml.ensemble import RandomForestClassifier as cusRandomForestClassifier
from cuml.metrics import accuracy_score as cu_accuracy_score
# Read data in device memory
device_s_csv = cudf.read_csv('./DataSet/iris.csv', header = 0, delimiter = ',') # Get complete CSV
device_s_data = device_s_csv.iloc[:, [0, 1, 2, 3]].astype('float32') # Get data columns
device_s_labels = device_s_csv.iloc[:, 4].astype('category').cat.codes # Get labels column
# Plot data
#sns.pairplot(device_s_csv.to_pandas(), hue = 'variety');
# Split train and test data
device_s_data_train, device_s_data_test, device_s_labels_train, device_s_labels_test = cu_train_test_split(device_s_data, device_s_labels, train_size = 0.8, shuffle = True, random_state = 0)
# Use same data as host
#device_s_data_train = cudf.DataFrame.from_pandas(host_s_data_train)
#device_s_data_test = cudf.DataFrame.from_pandas(host_s_data_test)
#device_s_labels_train = cudf.Series.from_pandas(host_s_labels_train).astype('int32')
#device_s_labels_test = cudf.Series.from_pandas(host_s_labels_test).astype('int32')
# Create RandomForest model
cu_s_random_forest = cusRandomForestClassifier(n_estimators = 40,
                                               max_depth = 16,
                                               max_features = 1.0,
                                               n_streams = 1)
# Fit data in RandomForest
cu_s_random_forest.fit(device_s_data_train, device_s_labels_train)
# Predict data
cu_s_random_forest_labels_predicted = cu_s_random_forest.predict(device_s_data_test)
# Check score
print('accuracy_score: ', cu_accuracy_score(device_s_labels_test, cu_s_random_forest_labels_predicted))
The Iris dataset I am using is the standard one: four numeric feature columns plus a 'variety' label column with three classes.
Do you know why this could be happening? Both models are configured the same, with the same parameters... I have no idea why there is such a big difference between the scores.
Thank you.