
I found Dask recently and have some very basic questions about Dask DataFrame and its other data structures.

  1. Is a Dask DataFrame an immutable data type?
  2. Are Dask Array and DataFrame lazy data structures?

I don't know whether to use Dask, Spark, or pandas for my situation. I have 200 GB of data to process. The computation took 9 hours with a plain Python program, but it could run in parallel in much less time by using all 16 cores of my processor. If I split the DataFrame myself in pandas, I have to worry about the commutative and associative properties of my calculations. On the other hand, I could use a standalone Spark cluster to just split up the data and run it in parallel.

Do I need to set up a cluster for Dask, as with Spark?
How do I run Dask DataFrames on my own compute nodes?
Does Dask need a master-slave setup?

I am a fan of pandas, so I am looking for solutions similar to pandas.

Hariprasad

1 Answer


There appear to be a few questions here:

Q: Are Dask.dataframes immutable?

Not strictly. They support column assignment. Generally, though, you're correct that most of the mutation operations of pandas are not supported.
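A minimal sketch of the distinction (the column names here are made up):

```python
import pandas as pd
import dask.dataframe as dd

# Build a small Dask DataFrame from a pandas DataFrame, split into 2 partitions.
pdf = pd.DataFrame({"x": range(10), "y": range(10)})
ddf = dd.from_pandas(pdf, npartitions=2)

# Column assignment works, much as it does in pandas.
ddf["z"] = ddf["x"] + ddf["y"]

# Element-wise, in-place mutation is generally not supported; something like
# ddf.loc[0, "x"] = 99 will not behave the way it does in pandas.
```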

Q: Are Dask.dataframe and Dask.array lazy?

Yes
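A small sketch of what "lazy" means in practice: operations only build a task graph, and nothing executes until you call `.compute()`.

```python
import pandas as pd
import dask.array as da
import dask.dataframe as dd

x = da.ones((10000, 10000), chunks=(1000, 1000))
total = x.sum()                      # lazy: a task graph, not a number

ddf = dd.from_pandas(pd.DataFrame({"a": range(100)}), npartitions=4)
mean_a = ddf["a"].mean()             # also lazy

# Work happens only when you ask for a concrete result.
print(total.compute())               # 100000000.0
print(mean_a.compute())              # 49.5
```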

Q: Do I need to set up a cluster?

No, you can choose to run Dask on a cluster or on a single machine.
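On a single machine the default schedulers already use your local cores. A rough sketch (the file pattern and column names are hypothetical):

```python
import dask.dataframe as dd

# Lazily read many CSV files into one partitioned DataFrame.
ddf = dd.read_csv("data-*.csv")

# compute() runs the work in parallel across the cores of this one machine,
# using Dask's default local scheduler -- no cluster required.
result = ddf.groupby("key")["value"].mean().compute()
```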

Q: If I want to use a cluster how do I do it?

See the documentation for Dask.distributed, and the setup docs in particular.
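Roughly, the pattern looks like this (hostnames and ports are placeholders; see the linked docs for the actual setup):

```python
from dask.distributed import Client

# On one machine, start the scheduler:       dask-scheduler
# On each worker machine, start a worker:    dask-worker tcp://scheduler-host:8786
# Then point a Client at the scheduler from your Python session:
client = Client("tcp://scheduler-host:8786")

# With no arguments, Client() instead starts a LocalCluster on this machine:
# client = Client()
```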

Q: Should I use Dask, Spark, or Pandas?

This question is overly broad and depends on your situation.

MRocklin
  • Thanks for pointing that out. Is there documentation on how to actually set up a `Cluster`? Like the `LocalCluster` in the docs? Coming from https://stackoverflow.com/questions/61902582/dask-setup-a-cluster-where-multiple-machines-can-connect-to – gies0r May 19 '20 at 23:43