Assume I have a Redshift table my_schema.my_table that contains a record as such:

id   col1   my_list
-------------------
1    true   [{"id": "A", "name": "foo"}, {"id": "B", "name": "bar"}]

The column my_list is a SUPER field. My understanding is that I would need to somehow iterate over the array, using get_array_length(my_list) and I recognize I can extract elements as such:

select
  my_list[0]."id",
  my_list[0]."name"
from my_schema.my_table

Is there a simple Redshift method for doing this and ultimately achieving a table that looks as such:

id   name
---------
A    foo
B    bar

Important to note, that this should happen dynamically. I will not know the size of the array. But the same key value pairs will always exist, i.e. id and name.


EDIT: Here is my best guess.

SELECT
  my_list[idx]."id",
  my_list[idx]."name"
FROM my_schema.my_table
CROSS JOIN (
  SELECT generate_series(0, GET_ARRAY_LENGTH(my_list) - 1) AS idx
  FROM my_schema.my_table
) AS idx_table

but I am getting the error:

[0A000] ERROR: Specified types or functions (one per INFO message) not supported on Redshift tables.

sgerbhctim
  • Does this answer your question? [Return elements of Redshift JSON array on separate rows](https://stackoverflow.com/questions/30195760/return-elements-of-redshift-json-array-on-separate-rows) – nbk Feb 22 '23 at 20:54
  • No, this is not what I am looking for. – sgerbhctim Feb 22 '23 at 20:59
  • why not, you have a json and need to unest it, as you can read readsiht has not that much json support – nbk Feb 22 '23 at 21:06

1 Answer

Yes, this is possible within Redshift. You would first need to unnest my_list so that each list item becomes its own row. Once you do that, you are able to parse out whatever fields you need from each individual element.

with cte as (
    SELECT item
    FROM my_schema.my_table t, t.my_list AS item AT idx
)
select item."id", item."name" from cte
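Putting it together for the columns in the question, a single query (no CTE needed) would look like the sketch below. This uses Redshift's PartiQL-style navigation of SUPER values; the `AT idx` clause is optional and only needed if you also want each element's array position. It assumes, as stated in the question, that every object in the array carries the keys id and name.

```sql
-- Unnest my_list: one output row per array element.
SELECT item."id"   AS id,
       item."name" AS name
FROM my_schema.my_table t, t.my_list AS item AT idx;
```

This avoids get_array_length and generate_series entirely, so it works regardless of how many elements each row's array contains. Note that generate_series is a leader-node-only function in Redshift, which is why the CROSS JOIN attempt in the question raises the `[0A000]` "not supported on Redshift tables" error.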