I implemented this some time ago with on-the-fly downsampling for some graphs. The drawback is that older data loses resolution, but I believe that is acceptable for your use case. And if you are interested in peaks, you could store max, avg, and min values for each downsampled bucket.
The algorithm isn't too hard either. If you have 5 samples per second and want to keep this granularity for, say, an hour, you have to store 5*60*60 = 18000 samples for that hour.
For the rest of the day you might go down to 1 sample every 5 seconds, reducing the amount by a factor of 25. The algorithm would then run every 5 seconds and calculate the median, min and max of the 5-second window that passed 24 hours ago. That results in 12*60*23 = 16560 more samples for the remaining 23 hours of the day.
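As a minimal sketch of that rollup step (the names `rollup_5s` and `Bucket` are just placeholders I made up, not anything you must use):

```python
from statistics import median
from typing import Iterable, NamedTuple

class Bucket(NamedTuple):
    minimum: float
    med: float
    maximum: float

def rollup_5s(samples: Iterable[float]) -> Bucket:
    """Collapse the ~25 raw samples of one 5-second window
    (5 samples/s * 5 s) into a single min/median/max record."""
    values = list(samples)
    return Bucket(min(values), median(values), max(values))

# Example: one 5-second window at 5 samples per second.
window = [20.1, 20.3, 19.9, 20.0, 20.2] * 5
print(rollup_5s(window))   # Bucket(minimum=19.9, med=20.1, maximum=20.3)
```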
Further back I would recommend one sample per minute, reducing the amount by another factor of 12, for maybe two weeks; that gives 60*24*13 = 18720 more samples for the remaining 13 days of the two-week window.
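Just to check the arithmetic of the three tiers in one place (the tier boundaries are the ones assumed above):

```python
# Back-of-the-envelope sample counts for the retention tiers described above.
raw      = 5 * 60 * 60    # 5 samples/s for the most recent hour  -> 18000
five_sec = 12 * 60 * 23   # 1 sample/5 s for the other 23 hours   -> 16560
one_min  = 60 * 24 * 13   # 1 sample/min for 13 more days         -> 18720

total = raw + five_sec + one_min
print(raw, five_sec, one_min, total)   # 18000 16560 18720 53280
```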
Special consideration should be given to how the data is stored in the DB. To get maximum performance you should make sure the data of one sensor is stored contiguously in one block of the database. If you use e.g. PostgreSQL, a block is 8192 bytes long and a single row is never split across two blocks. Assuming one sample takes 4 bytes, and accounting for the per-row overhead, you could fit a bit under 2048 samples in one block. At the highest resolution that is roughly 2040 / 5 / 60 ≈ 7 minutes of data, so 6 minutes fit comfortably. It MIGHT be a good idea to always write 6 minutes at once, with the later minutes initially filled with dummy values that get updated as the data arrives, so queries can fetch a single sensor's data in whole blocks.
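One way to sketch the "write several minutes at once" idea is to buffer a window in the application and write it as a single array-valued row with psycopg2. The table `sensor_raw(sensor_id, window_start, samples real[])` and every name below are my own assumptions, and this shows the simpler buffer-then-write variant rather than the dummy-row/UPDATE one:

```python
import datetime
import psycopg2

BLOCK_SECONDS = 6 * 60                         # ~6 minutes of raw data
RATE_HZ = 5                                    # 5 samples per second
SAMPLES_PER_BLOCK = BLOCK_SECONDS * RATE_HZ    # 1800 samples, well under ~2040

def flush_block(conn, sensor_id: int, window_start: datetime.datetime,
                samples: list) -> None:
    """Write one sensor's 6-minute window as a single array-valued row,
    so the whole window tends to land in one 8 kB heap block."""
    assert len(samples) <= SAMPLES_PER_BLOCK
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO sensor_raw (sensor_id, window_start, samples) "
            "VALUES (%s, %s, %s)",
            (sensor_id, window_start, samples),   # list is adapted to a real[]
        )
    conn.commit()
```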
At the very least, I would use separate tables for the different granularities.
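A possible layout for those per-granularity tables could look like this (again, all names are made up for illustration; the rollup tables hold the min/median/max produced by the downsampling step):

```python
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sensor_raw (
    sensor_id    integer     NOT NULL,
    window_start timestamptz NOT NULL,
    samples      real[]      NOT NULL,           -- ~6 minutes of 5 Hz data
    PRIMARY KEY (sensor_id, window_start)
);

CREATE TABLE IF NOT EXISTS sensor_5s (
    sensor_id integer     NOT NULL,
    ts        timestamptz NOT NULL,
    min_val   real, med_val real, max_val real,  -- one row per 5-second bucket
    PRIMARY KEY (sensor_id, ts)
);

CREATE TABLE IF NOT EXISTS sensor_1m (
    sensor_id integer     NOT NULL,
    ts        timestamptz NOT NULL,
    min_val   real, med_val real, max_val real,  -- one row per 1-minute bucket
    PRIMARY KEY (sensor_id, ts)
);
"""

def create_schema(conn) -> None:
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()
```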