
When I went through my first learning steps with Kinesis, Firehose, and Redshift today, I was pleased to discover that Amazon provides a "try our demo data producer" setup.

I was frustrated to learn that it does not seem to actually work.

So, I went digging. I found that STL_LOAD_ERRORS contained errors suggesting a delimiter was expected, and that the offending records began with something like {field:val,field:val}{field:val,field:val}.

...{"TICKER_SYMBOL": |     1214 | Delimiter not found 

"Must be stripping newlines somewhere," I thought.

Digging further, I found that the records landing in the relevant S3 bucket are in a surprising format:

{field:val,field:val}{field:val,field:val}...

That is, there are no delimiters between the apparent records, which arrive as single-line files of several dozen kilobytes each.

Other SO posts seem to suggest that this is actually the expected data format.

Why does Redshift need data in a format the data demo doesn't use? Which do I reconfigure?

Two things learned so far. 1) The example data is JSON, so JSON needs to be turned on in the COPY command, as JSON 'auto'. 2) The example data has upper-case keys, while Redshift column names are lower-case only and matched case-sensitively, so a JSONPaths file is needed. – John Haugeland Jan 05 '17 at 07:51

2 Answers


Okay. There were three problems.

  1. The AWS example data producer emits data in a format that requires alterations to the Redshift COPY command, but the tutorial doesn't tell you that.
  2. You need to add FORMAT AS JSON 's3://yourbucketname/aJsonPathFile.txt' to the COPY command (see the sketch after this list).
  3. You need to create a JSONPaths file, because the default data producer emits upper-case column names, which Redshift cannot match to its lower-case columns on its own.
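
Concretely, the two pieces look something like the following. The key names in the JSONPaths file are taken from the demo's stock-ticker records and may differ in your setup, and the table name, bucket, and IAM role are placeholders, so treat this as a sketch rather than a drop-in fix.

aJsonPathFile.txt, mapping the producer's upper-case keys to columns by position:

    {
      "jsonpaths": [
        "$['TICKER_SYMBOL']",
        "$['SECTOR']",
        "$['CHANGE']",
        "$['PRICE']"
      ]
    }

And the adjusted COPY command referencing it:

    -- columns are filled in the order the jsonpaths file lists them
    copy stock_data
    from 's3://yourbucketname/firehose-prefix/'
    iam_role 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    format as json 's3://yourbucketname/aJsonPathFile.txt';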
John Haugeland

I also just ran into this issue. The root of the issue is, as you pointed out, that the generated data has column names which are all upper-case, while the table creation code snippet has all lower-case column names.
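
For context, the table from that snippet looks something like this. Only TICKER_SYMBOL is confirmed by the error output above; the remaining columns are reconstructed from the demo's stock-ticker data, so treat them as assumptions:

    -- unquoted identifiers fold to lower case in Redshift,
    -- so these never match the producer's upper-case JSON keys
    create table stock_data (
      ticker_symbol varchar(4),
      sector        varchar(32),
      change        float,
      price         float
    );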

The easiest solution I found was to change the COPY command suggested in Step 2 from json 'auto' to json 'auto ignorecase':

json 'auto ignorecase'

This should resolve the load errors from the generated data without needing to create a custom JSONPaths file.
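
In full, the adjusted COPY looks something like this (table name, bucket, and IAM role are placeholders):

    -- ignorecase lets 'auto' match TICKER_SYMBOL to ticker_symbol, etc.
    copy stock_data
    from 's3://yourbucketname/firehose-prefix/'
    iam_role 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    json 'auto ignorecase';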

See the Redshift COPY documentation on data format parameters for details of this syntax.