I am trying to use your dask-spark project, proposed by Matthew Rocklin. When I add dask-spark to my project, the client hangs with the message `Waiting for workers`, as shown in the figure below.
Here I run two dask workers, started with `dask-worker tcp://ubuntu8:8786` and `dask-worker tcp://ubuntu9:8786`, and two Spark workers in standalone mode, registered as worker-20180918112328-ubuntu8-45764 and worker-20180918112413-ubuntu9-41972.
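For reference, this is the minimal check I use to see whether the dask workers have actually registered with the scheduler (assuming the scheduler listens at tcp://ubuntu8:8786; adjust the address for your setup):

```python
from dask.distributed import Client

# Connect directly to the (assumed) scheduler address.
client = Client('tcp://ubuntu8:8786')

# List the workers the scheduler currently knows about; an empty mapping
# here matches the "Waiting for workers" symptom.
print(client.scheduler_info()['workers'])
```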
My Python code is as follows:
```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib
from dask.distributed import Client
import distributed.joblib  # registers the 'dask.distributed' joblib backend
from pyspark import SparkConf, SparkContext
from dask_spark import spark_to_dask

if __name__ == '__main__':
    sc = SparkContext()

    # Connect to the cluster: build a dask client from the Spark context
    client = spark_to_dask(sc)

    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data,
        digits.target,
        train_size=0.75,
        test_size=0.25,
    )

    tpot = TPOTClassifier(
        generations=2,
        population_size=10,
        cv=2,
        n_jobs=-1,
        random_state=0,
        verbosity=0,
    )

    # Run TPOT's internal joblib parallelism on the dask cluster
    with joblib.parallel_backend('dask.distributed',
                                 scheduler_host='ubuntu8:8786'):
        tpot.fit(X_train, y_train)

    print(tpot.score(X_test, y_test))
```
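In case the backend name matters, here is a sketch of the equivalent call with the newer dask joblib integration (assuming recent dask/joblib versions, where importing distributed registers a backend named simply 'dask'):

```python
import joblib
from dask.distributed import Client
import distributed  # registers the 'dask' joblib backend in newer versions

# A client must exist before selecting the backend (assumed scheduler address)
client = Client('tcp://ubuntu8:8786')

with joblib.parallel_backend('dask'):
    tpot.fit(X_train, y_train)  # tpot, X_train, y_train as defined above
```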
I would highly appreciate any help in solving this problem.