My pipe executes a COPY INTO command every time a Parquet file lands in a staged location in AWS S3, and that part works just fine (the execution itself).
This is my copy query (summarized):
copy into table_name
from (
    select
        TRY_TO_DATE(
            $1:int_field::varchar,
            'YYYYMMDD'
        ) as date_field
    from @"stage-location"/path/path2/ (FILE_FORMAT => 'c000')
)
PATTERN = '.*part.*'
ON_ERROR = 'SKIP_FILE_1%';
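For completeness, the pipe that runs this COPY looks roughly like this (a sketch; the pipe name is a placeholder and auto_ingest reflects the S3 notification setup described above):

create or replace pipe my_pipe
  auto_ingest = true  -- triggered by S3 event notifications when a new Parquet file lands
as
copy into table_name
from (
    select TRY_TO_DATE($1:int_field::varchar, 'YYYYMMDD') as date_field
    from @"stage-location"/path/path2/ (FILE_FORMAT => 'c000')
)
PATTERN = '.*part.*'
ON_ERROR = 'SKIP_FILE_1%';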
So I convert $1:int_field (type: int) to VARCHAR (::varchar) and then parse that varchar to DATE with the 'YYYYMMDD' format. That works fine for values of int_field that conform to this format, but when the field is 0, the load fails (only when it is executed by the pipe).
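For reference, the conversion itself is simple when tested with literals in a worksheet (a minimal sketch, with literal values standing in for $1:int_field):

select TRY_TO_DATE(20231115::varchar, 'YYYYMMDD');  -- parses normally, returns 2023-11-15
select TRY_TO_DATE(0::varchar, 'YYYYMMDD');         -- the case the pipe rejects with "Can't parse '0' as date"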
When the pipe executed the COPY command by itself, I checked COPY_HISTORY and got the following error:
Can't parse '0' as date with format 'YYYYMMDD'
And of course the load fails... FAILED LOAD
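This is roughly how I checked it (a sketch; the table name and time window are placeholders):

select file_name, status, first_error_message
from table(information_schema.copy_history(
    table_name => 'TABLE_NAME',
    start_time => dateadd(hours, -24, current_timestamp())
));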
Here is where things get interesting: when I execute this SAME copy command myself in a Worksheet, the load goes through smoothly: OK LOAD.
I tried:

- VALIDATE, VALIDATION_MODE and VALIDATE_PIPE_LOAD, but "This function does not support COPY INTO statements that transform data during a load", like mine.
- FILE_FORMAT = (FORMAT_NAME = 'c000' DATE_FORMAT = 'YYYYMMDD') ON_ERROR = 'SKIP_FILE_1%' >>> SAME ISSUE, the file is only loaded when I execute the COPY command by hand.
- I thought the problem was the ON_ERROR option, but I don't think I can remove it, because I need it to filter out the REAL errors :(
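For reference, this is roughly how I called the pipe validation function (a sketch; the pipe name and time window are placeholders):

select *
from table(information_schema.validate_pipe_load(
    pipe_name => 'MY_PIPE',
    start_time => dateadd(hours, -24, current_timestamp())
));
-- this is where I get "This function does not support COPY INTO statements that transform data during a load"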
Maybe it is some SESSION problem or something similar. I read something about DATE_INPUT_FORMAT, but I can't pin down the exact cause in order to solve this.
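In case it matters, this is how I'm checking that parameter (a sketch; I'm assuming the pipe does not run inside my worksheet session, so the account-level value may be the one that applies to it):

show parameters like 'DATE_INPUT_FORMAT' in session;
show parameters like 'DATE_INPUT_FORMAT' in account;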
Can someone help me? Thanks!