I want the same concept as data locality on Hadoop, but without using HDFS.
I have 3 dask-workers.
I want to compute over a big CSV file, for example mydata.csv.
I split mydata.csv into small files (mydata_part_001.csv ... mydata_part_100.csv) and store them in a local folder /data on each worker, e.g.
worker-01 stores mydata_part_001.csv - mydata_part_030.csv in its local folder /data
worker-02 stores mydata_part_031.csv - mydata_part_060.csv in its local folder /data
worker-03 stores mydata_part_061.csv - mydata_part_100.csv in its local folder /data
How can I use Dask to compute over mydata laid out this way? Thanks.
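For reference, here is a minimal sketch of what I have tried so far (the scheduler address is just a placeholder, and I am assuming each worker only sees its own /data folder, so I don't think this respects the per-worker layout):

```python
import dask.dataframe as dd
from dask.distributed import Client

# connect to the scheduler that the three workers are registered with
# (hypothetical address)
client = Client("tcp://scheduler-host:8786")

# this globs /data/mydata_part_*.csv, but as far as I understand the glob is
# expanded on the machine where this script runs, not on worker-01/02/03,
# so the parts stored only on the workers' local disks are not picked up
df = dd.read_csv("/data/mydata_part_*.csv")

# example computation over the whole logical mydata.csv
result = df.describe().compute()
print(result)
```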