I noticed intarray allows an index type for very large INT arrays (gist__intbig_ops), but is there an option for BIGINT[] arrays (gist__bigint_ops)? It will be a small number of BIGINT[] values per record.
1 Answer
Try a GIN index. No extension is needed.
CREATE INDEX index_name ON tablename USING gin(bigint_array_column);
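For context, a minimal sketch of the full flow, assuming a hypothetical `events` table with a `tag_ids bigint[]` column (names are placeholders, not from the question). The default `array_ops` GIN opclass handles the array operators:

CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    tag_ids bigint[] NOT NULL
);

-- Default GIN opclass for arrays; no extension required for bigint[].
CREATE INDEX events_tag_ids_gin ON events USING gin (tag_ids);

-- Queries the index can serve: containment (@>) and overlap (&&).
SELECT * FROM events WHERE tag_ids @> ARRAY[1234567890123::bigint];
SELECT * FROM events WHERE tag_ids && ARRAY[42::bigint, 43::bigint];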

Daniel Vérité
- I'm slightly concerned about the potential performance impact of using GIN. It's on a pretty big table (7 million records) with occasional 1000+/sec inserts, although there are no updates. Does the performance loss from using GIN come from inserts or updates or both? – Nick Sep 04 '13 at 00:39
- @Nick: it's both. By default GIN uses a [FASTUPDATE](http://www.postgresql.org/docs/current/static/gin-implementation.html#GIN-FAST-UPDATE) method that mitigates this, and if your arrays are small, I don't see 1000 inserts/sec in spikes being a problem. But of course you should test it. – Daniel Vérité Sep 04 '13 at 12:16
- What about https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns saying `gin__int_ops` is required as well? Is there a bigint equivalent? – Marius Aug 13 '18 at 07:58
- @MariusAndreiana: it's not _required_, as shown in the initial answer, but it's plausible that the commenter had much better results with this opclass. It's mentioned in the `intarray` documentation. I'm not aware of an equivalent for bigint. – Daniel Vérité Aug 13 '18 at 13:17
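To make the trade-offs in the comments above concrete, here is a hedged sketch reusing the placeholder `events` table from earlier: the GIN storage parameter `fastupdate` controls the pending-list buffering mentioned in the FASTUPDATE comment, and `gin__int_ops` from the `intarray` extension accepts only `integer[]`, which is why there is no bigint variant in that extension.

-- fastupdate is on by default; turning it off makes each insert pay the full
-- index-maintenance cost immediately instead of buffering entries in a
-- pending list that is merged later.
CREATE INDEX events_tag_ids_gin_nofast ON events USING gin (tag_ids)
    WITH (fastupdate = off);

-- gin__int_ops comes from intarray and works on integer[] only, so it cannot
-- be applied to the bigint[] column; small_ids here is a hypothetical
-- integer[] column added purely for illustration.
CREATE EXTENSION IF NOT EXISTS intarray;
ALTER TABLE events ADD COLUMN small_ids integer[];
CREATE INDEX events_small_ids_gin ON events USING gin (small_ids gin__int_ops);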