We have a PostgreSQL database with the TimescaleDB extension that stores time-based sensor data. The data is already several gigabytes and constantly growing. There are two tables in the database with indexes, foreign keys, and so on, set up much like a classic relational schema for, say, a customer database. The question is which options exist to keep both insert performance and query performance fast.
Each sensor reading consists of a sensor id, a timestamp, and a float value. A sensor is identified by a numeric id plus a breadcrumb string that is typically around 50 characters long but can occasionally be longer. Storing the breadcrumb directly in the sensor data table would inflate that table considerably, so the sensor ids live in a separate table.
The sensor data table has a foreign key to the sensor id table and an index on sensor id and time.
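For reference, the layout looks roughly like the sketch below. Table and column names are placeholders rather than the real ones, and the create_hypertable call is only there to show that the data table is a TimescaleDB hypertable partitioned on time:

```sql
-- Rough sketch of the current schema (names are placeholders).
CREATE TABLE sensor (
    id         bigint PRIMARY KEY,
    breadcrumb text NOT NULL              -- ~50 characters, occasionally longer
);

CREATE TABLE sensor_data (
    sensor_id bigint           NOT NULL REFERENCES sensor (id),
    ts        timestamptz      NOT NULL,
    value     double precision NOT NULL,
    UNIQUE (sensor_id, ts)                -- doubles as the (sensor id, time) index
);

-- Hypertable partitioned on the time column.
SELECT create_hypertable('sensor_data', 'ts');
```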
Inserts arrive constantly from many thousands of devices every second, so insert performance is critical. Sensor data is never updated, and deletes only happen when old data is purged. A few users occasionally run ad-hoc queries to investigate the sensor data, and the application periodically runs queries that check whether current sensor data has changed compared to previous readings.
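In simplified terms, the write path is a constant stream of small inserts and the periodic application check compares each sensor's latest value with the previous one. Both snippets below are illustrative sketches against the placeholder schema above, not the real statements:

```sql
-- Typical write: many rows per second from thousands of devices,
-- never updated afterwards (values are made up).
INSERT INTO sensor_data (sensor_id, ts, value) VALUES
    (42, '2024-05-01 12:00:00+00', 21.5),
    (43, '2024-05-01 12:00:00+00', 18.2);

-- Sketch of the periodic "has anything changed?" check.
SELECT sensor_id, ts, value,
       lag(value) OVER (PARTITION BY sensor_id ORDER BY ts) AS previous_value
FROM   sensor_data
WHERE  ts > now() - interval '5 minutes';
```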
The question is now how to optimize the database for this use case. One idea is to drop the foreign key to improve insert performance, but I'm not sure whether query performance would really be unaffected by this, or to what extent. The sensor data table also has a unique constraint on sensor id and time. The problem is that the existing indexes are needed for the queries and hence cannot simply be dropped.
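Concretely, that idea would look something like the statement below (the constraint name is a placeholder; the real one can be looked up with \d sensor_data in psql). The unique constraint on (sensor id, time) and its index would stay, since the queries depend on it:

```sql
-- Hypothetical: drop only the foreign key, which removes the per-row
-- lookup against the sensor table on every insert. The unique
-- (sensor_id, ts) constraint and its index are kept for the queries.
ALTER TABLE sensor_data DROP CONSTRAINT sensor_data_sensor_id_fkey;
```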
Any ideas on how to tune things for this scenario are appreciated.