Per the Snowflake file sizing recommendations, data files should be roughly 100-250 MB compressed for data loading. Loading very large files (e.g. 100 GB or larger) is not recommended.
Snowpipe loads data in parallel; it is used to load continuous/streaming data in micro-batches in near real time (as soon as a file lands in S3, Azure Blob storage, etc.). The number of data files that can be processed in parallel is determined by the number and capacity of the servers/nodes in a warehouse.
For ad-hoc or one-time loads, you can use the COPY INTO command.
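A minimal COPY INTO sketch for such a one-time load is below; the stage (@my_s3_stage), target table (my_table), and file format options are placeholders you would adjust for your own setup:

```sql
-- Ad-hoc / one-time bulk load from an external stage into a target table.
-- Requires a running warehouse; it processes the staged files that exist
-- at the time you execute it.
COPY INTO my_table
  FROM @my_s3_stage/daily/
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"')
  ON_ERROR = 'CONTINUE';
```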
Loading a single huge file via Snowpipe is not recommended.
If you try to ingest one huge file (say, 3 million rows), the load cannot be parallelized. Using a larger warehouse will not boost performance either, because the number of load operations that run in parallel cannot exceed the number of data files to be loaded. A single-file load therefore uses only one node of the warehouse, and the remaining nodes sit idle.
So if you want to use Snowpipe auto-ingest, split the large file into smaller files of roughly 100-250 MB (compressed). Splitting larger data files allows the load to scale linearly across the available nodes.
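As a rough sketch, an auto-ingest pipe over the same stage could look like the following (object names are placeholders, and AUTO_INGEST = TRUE additionally requires event notifications to be configured on the cloud storage side):

```sql
-- Snowpipe auto-ingest: each new file that lands in the stage is loaded
-- in a micro-batch shortly after it arrives. With the large file split
-- into many 100-250 MB chunks, those chunks can be loaded in parallel.
CREATE OR REPLACE PIPE my_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO my_table
    FROM @my_s3_stage/daily/
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```

You can then check the pipe's state with SELECT SYSTEM$PIPE_STATUS('my_pipe');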
Please refer to these links for more details:
https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html#general-file-sizing-recommendations
https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html