3

I have 4 machines: M1, M2, M3, and M4. The scheduler, the client, and one worker run on M1, and I've put a CSV file on M1. The rest of the machines run workers only.

When I run the program with read_csv in Dask, it fails with a "file not found" error.
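Roughly what I am running (the scheduler address and file path here are placeholders, not my real values):

```python
import dask.dataframe as dd
from dask.distributed import Client

# connect to the scheduler running on M1 (address is a placeholder)
client = Client("tcp://M1:8786")

# the CSV only exists on M1's local disk (path is a placeholder)
df = dd.read_csv("/home/user/data.csv")

# workers on M2-M4 look for /home/user/data.csv on their own disks and fail
print(df.head())
```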

Dhruv Kumar

1 Answer

3

When one of your workers tries to load the CSV, it will not be able to find it, because the file is not present on that machine's local disc. This should not be a surprise. You can get around this in a number of ways:

  • copy the file to every worker; this is obviously wasteful in terms of disc space, but the easiest to achieve
  • place the file on a networked filesystem (NFS mount, gluster, HDFS, etc.)
  • place the file on an external storage system such as Amazon S3 and refer to that location
  • load the data in your local process and distribute it with scatter; in this case, presumably the data is small enough to fit in memory, and Dask would probably not be doing much for you (this option and the S3 one are sketched just after this list).
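For example, rough sketches of the last two options (the scheduler address, bucket name, and file paths below are placeholders; reading from S3 also requires the s3fs package):

```python
import dask.dataframe as dd
import pandas as pd
from dask.distributed import Client

client = Client("tcp://M1:8786")  # scheduler address is a placeholder

# External storage: every worker fetches its own piece from the bucket
df = dd.read_csv("s3://my-bucket/data.csv")

# Scatter: read locally on the client, then push the data to the workers;
# only sensible if the data fits comfortably in memory
local_df = pd.read_csv("/path/on/M1/data.csv")
[remote_df] = client.scatter([local_df])
result = client.submit(len, remote_df).result()
```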
mdurant
  • So will a similar problem arise with to_csv? I mean, will the workers write their portion of the computed file on their own machines? – Dhruv Kumar Jun 25 '18 at 04:44
  • Is it possible in Dask for worker nodes to fetch part of the file (or the whole file) from the client themselves, when needed? – Dhruv Kumar Jun 25 '18 at 05:18
  • Yes, you can [upload files](http://distributed.readthedocs.io/en/latest/api.html#distributed.client.Client.upload_file) from the client – mdurant Jun 25 '18 at 12:56
  • Will Dask do it automatically, or do I have to do some configuration? – Dhruv Kumar Jun 25 '18 at 13:08
  • 1
    How would Dask know to do that? Also, note that this is not the intended use of upload_file; you would be better off sorting out your own copies if copying is the method you want to use. – mdurant Jun 25 '18 at 13:13
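For completeness, a rough sketch of the upload_file route discussed in these comments (as noted above, this is not the intended use of the API; the assumption here is that the uploaded file lands in each worker's local directory, and the scheduler address and file name are placeholders):

```python
import os
import pandas as pd
from dask.distributed import Client, get_worker

client = Client("tcp://M1:8786")  # scheduler address is a placeholder
client.upload_file("data.csv")    # copies the file to every worker

def read_uploaded_csv():
    # assumption: upload_file places the file in the worker's local directory
    worker = get_worker()
    return pd.read_csv(os.path.join(worker.local_directory, "data.csv"))

df = client.submit(read_uploaded_csv).result()
```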