I want to run a dsbulk unload command, but my Cassandra cluster has ~1 TB of data in the table I want to export. Is there a way to run dsbulk unload and stream the data directly into S3, as opposed to writing it to disk first?
I'm currently running the following command in my dev environment, but obviously this just writes to disk on my machine:
bin/dsbulk unload -k myKeySpace -t myTable -url ~/data --connector.csv.compression gzip
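
For what it's worth, I was wondering whether piping dsbulk's stdout straight through the AWS CLI would do the trick. A rough sketch of what I had in mind (the bucket name and object key are just placeholders, and I moved the gzip compression out of dsbulk and into the pipe since I'm not sure how connector.csv.compression interacts with stdout):

# "-url -" asks dsbulk to write the unloaded rows to stdout instead of to files
# gzip compresses the stream; "aws s3 cp -" uploads stdin to S3 as a multipart upload
bin/dsbulk unload -k myKeySpace -t myTable -url - \
  | gzip \
  | aws s3 cp - s3://my-bucket/myTable/export.csv.gz

My concern is whether funneling everything through a single stdout stream like this loses dsbulk's parallel multi-file output, and whether it can realistically keep up with ~1 TB of data.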