Is there a way to directly convert a Spark dataframe to a Dask dataframe?
I am currently using Spark's .toPandas() function to convert it into a pandas dataframe and then into a Dask dataframe. I believe this is an inefficient operation and does not utilize Dask's distributed processing capabilities, since pandas will always be the bottleneck.
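For reference, a minimal sketch of what I'm doing now (assuming `spark_df` is an existing PySpark DataFrame; the partition count is arbitrary):

```python
import dask.dataframe as dd

# Current approach: collect the entire Spark dataframe to the driver as a
# single pandas dataframe, then wrap it in a Dask dataframe.
pandas_df = spark_df.toPandas()                      # pulls all data onto the driver
dask_df = dd.from_pandas(pandas_df, npartitions=8)   # npartitions chosen arbitrarily
```

The collect-to-driver step is where everything funnels through pandas, which is what I'd like to avoid.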