(I was going to post this on the DBA SE, but there's no timescale tag.)
I'm using TimescaleDB to store a real-time stream of about 500-1000 inserts per second, and need to pass this data to a client application as soon as it comes in.
I don't necessarily need to process the raw millisecond-level data (though I may), but the client definitely needs to receive 1-, 5-, 10-, and 30-second (etc.) aggregations of the live data as each time bucket/slot becomes ready — so every second, every 5 seconds, every 10 seconds, and so on. The aggregate values would be things like the max and min for each time slot.
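For reference, the kind of aggregation I mean can be expressed with TimescaleDB's `time_bucket` — a minimal sketch, assuming a hypothetical `readings` table with `time`, `device_id`, and `value` columns:

```sql
-- Hypothetical raw hypertable: readings(time TIMESTAMPTZ, device_id INT, value DOUBLE PRECISION)
SELECT time_bucket('5 seconds', time) AS bucket,
       device_id,
       max(value) AS max_value,
       min(value) AS min_value
FROM readings
WHERE time > now() - INTERVAL '1 minute'
GROUP BY bucket, device_id
ORDER BY bucket;
```

The open question is how to get results like this pushed to the client continuously rather than polling.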
I was going to use Postgres triggers to aggregate the data into new tables in a waterfall-style manner, and use LISTEN/NOTIFY to tell the client when each time chunk is "ready" — but then I came across PipelineDB.
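To make that concrete, the trigger + NOTIFY approach I had in mind would look roughly like this (a sketch only — the table, channel, and function names are placeholders, and I haven't tested this at 500–1000 inserts/second):

```sql
-- Sketch: notify listeners which 1-second bucket just received data.
-- Assumes the hypothetical readings(time, device_id, value) table.
CREATE OR REPLACE FUNCTION notify_bucket_ready() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('bucket_ready',
                      time_bucket('1 second', NEW.time)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER readings_notify
AFTER INSERT ON readings
FOR EACH ROW EXECUTE PROCEDURE notify_bucket_ready();

-- The client would then run:  LISTEN bucket_ready;
```

My concern is whether a per-row trigger like this holds up at that insert rate, which is partly why PipelineDB caught my eye.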
However, it's not clear to me exactly how to set that architecture up. Should PipelineDB receive the initial data stream and then pass it on to the TimescaleDB hypertable? Or should TimescaleDB pass the data to PipelineDB? I assume PipelineDB would be the point of contact for the client.
What would the most performant general architecture be, and how would I pass data between TimescaleDB and PipelineDB?