
I'm using a Postgres database to keep track of item data across many different groups. Each group (A, B, C, ...) contains the same items, identified by item ID, but with different values for some of their properties (stats). For that reason, I was going to use an array per item to track each group ID together with the item's stats in that group.

However, I've read that arrays can be a major slowdown in Postgres. Is there a specific way I should be using arrays here, or is this kind of setup fine? I would need to compare an item's stats across different groups repeatedly. There will be ~5,000 groups, ~50,000 unique items, and ~4 stats on each item.

So given item A, the data would look like this:

ITEM A:
[group=A, statA=592, statB=128, statC=120, statD=9]
[group=B, statA=999, statB=12, statC=491, statD=99]
...
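
For concreteness, here is a rough sketch of the two layouts I'm weighing, plus the kind of cross-group comparison I'd be running repeatedly. All table and column names below are placeholders I made up for illustration, not an existing schema:

    -- Option 1: the array layout I was considering.
    -- One row per item; each inner array row packs
    -- (group_id, statA, statB, statC, statD) as integers.
    CREATE TABLE item_stats_array (
        item_id     integer PRIMARY KEY,
        group_stats integer[][]
    );

    -- Option 2: a normalized layout, one row per (item, group) pair.
    CREATE TABLE groups (
        group_id   integer PRIMARY KEY,
        group_name text NOT NULL    -- 'A', 'B', 'C', ...
    );

    CREATE TABLE items (
        item_id   integer PRIMARY KEY,
        item_name text NOT NULL
    );

    CREATE TABLE item_group_stats (
        item_id  integer NOT NULL REFERENCES items,
        group_id integer NOT NULL REFERENCES groups,
        stat_a   integer NOT NULL,
        stat_b   integer NOT NULL,
        stat_c   integer NOT NULL,
        stat_d   integer NOT NULL,
        PRIMARY KEY (item_id, group_id)
    );

    -- With the normalized layout, comparing one item's stats
    -- between two groups is a self-join:
    SELECT a.stat_a - b.stat_a AS stat_a_diff,
           a.stat_b - b.stat_b AS stat_b_diff,
           a.stat_c - b.stat_c AS stat_c_diff,
           a.stat_d - b.stat_d AS stat_d_diff
    FROM   item_group_stats a
    JOIN   item_group_stats b USING (item_id)
    WHERE  item_id    = 42
    AND    a.group_id = 1
    AND    b.group_id = 2;

My understanding is that the array version keeps everything in a single row per item, while the normalized version needs a join but can use the (item_id, group_id) primary key index for the lookups. Is that the right way to think about it?
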
  • It is not a performance issue. It is a normalization one. The array will make everything harder. Just do proper normalization. If you don't know what that means then ask how to do it. – Clodoaldo Neto Feb 16 '17 at 19:00
  • Possible duplicate of [Postgresql - performance of using array in big database](http://stackoverflow.com/questions/11791698/postgresql-performance-of-using-array-in-big-database) – Schwern Feb 16 '17 at 19:50

0 Answers