
I need a database that stores key-array pairs, like the rows below:

===== TABLE: shoppingCart =====
user_id - product_ids
1       - [1, 2, 3, 4]
2       - [100, 200, 300, 400]

I want to be able to update a row by merging a new array into the existing one, skipping duplicate values. That is, I want an operation like:

UPDATE shoppingCart SET product_ids = UNION(product_ids, [4, 5, 6]) WHERE user_id = 1

which should leave the first row's product_ids column as:

[1, 2, 3, 4, 5, 6]

I also need operations like selecting a sub-array, e.g.:

SELECT product_ids[0:2] from shoppingCart

which should return:

[1,2]

Any suggestions for the best database for this purpose?

The arrays I need to work with are usually long, containing roughly 1,000 to 10,000 values, each a long integer (or the string representation of a long integer).
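
The UNION call and the 0-based slice above are pseudo-SQL. For concreteness, a minimal sketch of how these two operations might look in one candidate system (standard PostgreSQL with a bigint[] column; schema names taken from the example above):

-- assumed schema matching the example table
CREATE TABLE shoppingCart (
    user_id     bigint PRIMARY KEY,
    product_ids bigint[] NOT NULL DEFAULT '{}'
);

-- merge new ids into the existing array, skipping duplicates
UPDATE shoppingCart
SET product_ids = (
    SELECT array_agg(DISTINCT v ORDER BY v)
    FROM unnest(product_ids || ARRAY[4, 5, 6]::bigint[]) AS t(v)
)
WHERE user_id = 1;

-- Postgres array slices are 1-based and inclusive, so [1:2] returns the first two elements
SELECT product_ids[1:2] FROM shoppingCart WHERE user_id = 1;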

Ramtin Didab
  • If this is strictly for integers (no bigint), Postgres' [intarray extension](https://www.postgresql.org/docs/current/intarray.html#id-1.11.7.27.6) would let you do that, as it has a union operator: `product_ids = product_ids | array[4,5,6]`. But I don't think a relational database would be a good idea for this to begin with – a_horse_with_no_name Apr 29 '20 at 14:46
  • @a_horse_with_no_name Well, thank you, but I specifically need to work with BIGINTs. Doesn't PostgreSQL support BIGINT arrays?! – Ramtin Didab Apr 30 '20 at 05:19
  • @a_horse_with_no_name And yes, I know relational databases may not be a good choice for such a problem, but I couldn't find any alternative among NoSQL databases yet. – Ramtin Didab Apr 30 '20 at 05:21
  • Postgres does allow bigint arrays, but the union operator that the extension provides only works with integer arrays. It's not really hard to write a function/operator (in SQL) that does the same for bigint, though; a sketch follows these comments. The question is whether that will be fast enough – a_horse_with_no_name Apr 30 '20 at 05:42
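
A minimal sketch of that idea in plain SQL over bigint[] (the helper name bigint_array_union is hypothetical and not provided by intarray):

-- hypothetical helper: de-duplicating union of two bigint arrays
CREATE FUNCTION bigint_array_union(a bigint[], b bigint[])
RETURNS bigint[]
LANGUAGE sql IMMUTABLE
AS $$
    SELECT array_agg(DISTINCT v ORDER BY v) FROM unnest(a || b) AS t(v)
$$;

-- usage, matching the UPDATE from the question
UPDATE shoppingCart
SET product_ids = bigint_array_union(product_ids, ARRAY[4, 5, 6]::bigint[])
WHERE user_id = 1;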

0 Answers