Hi everyone, I was in a bit of a conflict today on whether to use GraphQL or not. The situation: I am working on a really large database, and I was wondering whether there is a limit to consider for aggregation queries (where you get the count), or whether this is viable given that the DB contains millions of records. Any comments and suggestions are welcome, thanks!
1 Answer
Hasura doesn't handle aggregates by itself; it delegates almost all operations to the underlying database.
Example: I have a table with ~12M rows. When I execute this aggregate query:
{
  price_history_aggregate {
    aggregate {
      count
    }
  }
}
The exact generated SQL is:
SELECT
  json_build_object(
    'aggregate',
    json_build_object('count', COUNT(*))
  ) AS "root"
FROM
  (
    SELECT
    FROM
      (
        SELECT
          *
        FROM
          "public"."price_history"
        WHERE
          ('true')
      ) AS "_0_root.base"
  ) AS "_1_root"
As you can see, this SQL is already building the JSON response: {"aggregate" : {"count" : 12284957}}.
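You can confirm that all of the work happens inside PostgreSQL by running the equivalent count directly, for example with EXPLAIN ANALYZE in psql (a minimal sketch against the price_history table from my example; your plan and timings will differ):

-- Show the actual execution plan and timing for the count
-- that backs the aggregate query above.
EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*) FROM "public"."price_history";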
Now, running this query takes a couple of seconds. There is some discussion of PostgreSQL being slow at counting rows (https://wiki.postgresql.org/wiki/Slow_Counting), but that has nothing to do with Hasura or GraphQL.
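If an exact count is not required, the wiki page above suggests reading the planner's row estimate from the pg_class catalog instead, which returns almost instantly even on multi-million-row tables (a sketch, assuming the same price_history table; the estimate is only as fresh as the last VACUUM / ANALYZE or autovacuum run, so it can lag the true count):

-- Approximate row count from the planner's statistics;
-- fast, but refreshed only by VACUUM / ANALYZE / autovacuum.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'public.price_history'::regclass;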

Renato