The issue might be that you are partitioning your data by day and that your bulk-insert CSV spans too many days. Try removing the PARTITION BY toYYYYMMDD(business_ts) clause from your table definition.

I ran into a similar issue when inserting into one of my tables. Before adding a --max_memory_usage argument, I was getting exactly the same error you are reporting here:

Code: 210. DB::NetException: Connection reset by peer, while writing to socket (127.0.0.1:9000)
I then added --max_memory_usage=15000000000 and I received a more helpful error message:
Received exception from server (version 20.11.5):
Code: 252. DB::Exception: Received from localhost:9000. DB::Exception: Too many partitions for single INSERT block (more than 100). The limit is controlled by 'max_partitions_per_insert_block' setting. Large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc)..
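That limit can be raised if you really do need that many partitions in a single INSERT, although the error message itself argues against going down that road. A minimal sketch of the workaround (the value 1000 is arbitrary, and this only applies to the current session):

```sql
-- Raise the per-INSERT partition limit for this session only.
-- This is a workaround; reducing the number of partitions is the better fix.
SET max_partitions_per_insert_block = 1000;

-- Then run the bulk INSERT as before.
```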
As the more helpful error message points out, PARTITION BY is not there to improve SELECT performance; it exists to make non-query data manipulations (such as DROP PARTITION) more efficient. I don't know all the details of your use case, but it may make sense to ORDER BY both spin_ts and business_ts and drop the PARTITION BY on business_ts.
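For example, something along these lines if you go that route. This is only a sketch: I'm assuming a MergeTree table and guessing at the column list and types, so adjust it to your actual schema:

```sql
-- Hypothetical table definition: the real column list and types will differ.
CREATE TABLE my_table
(
    spin_ts     DateTime,
    business_ts DateTime,
    value       Float64
)
ENGINE = MergeTree
-- No PARTITION BY toYYYYMMDD(business_ts): the table is a single partition.
ORDER BY (spin_ts, business_ts);
```

With that layout, range queries on spin_ts (and, to a lesser extent, business_ts) still benefit from the primary key, and a bulk INSERT spanning many days no longer fans out into hundreds of partitions.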