
create or replace stage elasticsearch_dev url='s3://s3bucket/ElasticSearch' credentials=(aws_role='arn:aws:iam::XXXXXXX:role/role_snowflake');

copy into @elasticsearch_dev/test/SAMPLE.json from (select to_json(object_construct(*)) from Sample) file_format = (type = json) overwrite = TRUE;

I'm unloading the Sample table to JSON format in S3. When I look in S3, the file is compressed and named SAMPLE.json_0_0_0.json.gz.

The S3 file should not be compressed; it should be named SAMPLE.json_0_0_0.json.

How can I achieve that?

Sundar

2 Answers


Compression is actually good practice, but I'm sure you have a use case for not having it. I have not tried this yet, but it looks like under formatTypeOptions you can disable compression by setting COMPRESSION to NONE:

-- If FILE_FORMAT = ( TYPE = JSON ... ) COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
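For example, applying that option to the unload command from the question would look something like this (untested, based on the documented FILE_FORMAT options; stage, path, and table names are taken from the question):

```sql
-- COMPRESSION = NONE inside the file format options
-- should produce SAMPLE.json_0_0_0.json instead of .json.gz
copy into @elasticsearch_dev/test/SAMPLE.json
from (select to_json(object_construct(*)) from Sample)
file_format = (type = json compression = none)
overwrite = TRUE;
```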

Rajib Deb

Use the COMPRESSION = NONE parameter.

It is explained in the Snowflake documentation, along with all the other parameters you can use: https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#type-json

NickW