
I want to be able to add a timestamp to the filename I'm writing to S3. So far I've been able to write files to AWS S3 using the example below. Can someone guide me on how to put a datetime stamp in the file name?

copy into @s3bucket/something.csv.gz
from (select * from mytable)
file_format = (type=csv FIELD_OPTIONALLY_ENCLOSED_BY = '"' compression='gzip')
single=true
header=TRUE;

Thanks in advance.

codeBarer
  • Are you not able to specify the file name here? @s3bucket/datetime_something.csv.gz Or are you trying to do it dynamically? – Suzy Lockwood May 04 '20 at 18:19
  • I want to be able to do it dynamically, so that when the script runs it'll automatically create that datetime_something.csv – codeBarer May 04 '20 at 19:52

3 Answers


The syntax for the stage or location path in a COPY INTO statement does not allow functions, so the path cannot be defined dynamically in plain SQL.

However, you can use a stored procedure to build the query dynamically, using the JavaScript Date API and some string formatting.

Here's a very trivial example for your use-case, with some code adapted from another question:

CREATE OR REPLACE PROCEDURE COPY_INTO_PROCEDURE_EXAMPLE()
RETURNS VARIANT
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
    var rows = [];

    var n = new Date();
    // May need refinement to zero-pad some values or achieve a specific format
    var datetime = `${n.getFullYear()}-${n.getMonth() + 1}-${n.getDate()}-${n.getHours()}-${n.getMinutes()}-${n.getSeconds()}`;

    var st = snowflake.createStatement({
        sqlText: `COPY INTO '@s3bucket/${datetime}_something.csv.gz' FROM (SELECT * FROM  mytable) FILE_FORMAT=(TYPE=CSV  FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION='gzip') SINGLE=TRUE HEADER=TRUE;`
    });

    var result = st.execute();
    result.next();
    rows.push(result.getColumnValue(1));

    return rows;
$$;

To execute, run:

CALL COPY_INTO_PROCEDURE_EXAMPLE();

The above lacks polished date formatting (zero-padding months, days, hours, minutes, and seconds), error handling (if the COPY INTO fails), parameterisation of the input query, etc., but it should give a general idea of how to achieve this.
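
For instance, a minimal zero-padding sketch (kept to plain string slicing rather than newer helpers like padStart, to stay safe across JavaScript engine versions) could replace the datetime line above:

// Zero-pad a single date component to two digits, e.g. 4 -> "04"
var pad = function (v) { return ('0' + v).slice(-2); };
// Produces e.g. "2020-05-04-09-07-03", which also sorts lexicographically
var datetime = n.getFullYear() + '-' + pad(n.getMonth() + 1) + '-' + pad(n.getDate())
             + '-' + pad(n.getHours()) + '-' + pad(n.getMinutes()) + '-' + pad(n.getSeconds());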


As Sharvan Kumar suggests, Snowflake now supports this:

-- Partition the unloaded data by date and hour. Set 32000000 (32 MB) as the upper size limit of each file to be generated in parallel per thread.
copy into @%t1
  from t1
  partition by ('date=' || to_varchar(dt, 'YYYY-MM-DD') || '/hour=' || to_varchar(date_part(hour, ts))) -- Concatenate labels and column values to output meaningful filenames
  file_format = (type=parquet)
  max_file_size = 32000000
  header=true;

list @%t1;
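
Adapted to the original question, a rough sketch might look like the following (the timestamp column name load_ts is only an assumption for illustration; it is not from the question):

copy into @s3bucket
  from mytable
  partition by ('dt=' || to_varchar(load_ts, 'YYYY-MM-DD'))  -- assumed timestamp column in mytable
  file_format = (type=csv field_optionally_enclosed_by='"' compression='gzip')
  header=true;

Note that this writes the date into the path prefix (one folder per day) rather than producing the single timestamped file that the original single=true statement would.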
Stephen Pace

This feature is not supported yet in Snowflake; however, it will be coming soon.

Sharvan Kumar