You have to decide what is important to you and design your table(s) to match your use cases.
You say you want to query the last value for every sensor and that there are 2000+ sensors. What will you do with these 2000+ values? How often do you need these values and can the values be slightly out of date?
One solution would be to have two tables: one where you append historical values (time series data) and another where you keep only the latest reading for each sensor, overwriting it on every update. When you need the current state, just scan this second table; with only ~2000 items it's about as efficient as reads get. The cost is that every sensor update now requires two writes.
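A minimal sketch of that dual-write pattern, assuming DynamoDB with boto3 (the table names `SensorHistory` and `SensorLatest` and their key schemas are my assumptions, not something from your setup):

```python
from decimal import Decimal
import time

import boto3

dynamodb = boto3.resource("dynamodb")
history = dynamodb.Table("SensorHistory")  # PK: sensor_id, SK: ts  (hypothetical)
latest = dynamodb.Table("SensorLatest")    # PK: sensor_id only    (hypothetical)

def record_reading(sensor_id: str, value: float) -> None:
    item = {
        "sensor_id": sensor_id,
        "ts": int(time.time() * 1000),
        "value": Decimal(str(value)),  # DynamoDB numbers must be Decimal, not float
    }
    history.put_item(Item=item)  # write 1: append to the time series
    latest.put_item(Item=item)   # write 2: overwrite this sensor's single "latest" item

def read_all_latest() -> list[dict]:
    # The latest-values table only ever holds ~2000 items, so a paginated scan is cheap
    resp = latest.scan()
    items = resp["Items"]
    while "LastEvaluatedKey" in resp:
        resp = latest.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
        items.extend(resp["Items"])
    return items
```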
Another potential solution would be to partition your time series data by time rather than by sensor id. Assuming all sensors are updated at each time point, a single query on the latest time partition returns the value of every sensor. This only works if you update the values of all sensors every time, and only if you do it at a regular cadence.
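A hedged sketch of that time-partitioned layout (again assuming DynamoDB/boto3; the table name `SensorByTime`, the epoch-minute partition key, and the once-per-cycle write are all assumptions):

```python
from decimal import Decimal

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
by_time = dynamodb.Table("SensorByTime")  # PK: reading_time (epoch minute), SK: sensor_id

def write_cycle(reading_time: int, readings: dict[str, float]) -> None:
    # One item per sensor, all sharing this cycle's partition key
    with by_time.batch_writer() as batch:
        for sensor_id, value in readings.items():
            batch.put_item(Item={
                "reading_time": reading_time,
                "sensor_id": sensor_id,
                "value": Decimal(str(value)),
            })

def read_cycle(reading_time: int) -> list[dict]:
    # A single Query against the latest time partition returns every sensor's value
    resp = by_time.query(KeyConditionExpression=Key("reading_time").eq(reading_time))
    items = resp["Items"]
    while "LastEvaluatedKey" in resp:
        resp = by_time.query(
            KeyConditionExpression=Key("reading_time").eq(reading_time),
            ExclusiveStartKey=resp["LastEvaluatedKey"],
        )
        items.extend(resp["Items"])
    return items
```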
However, if you update all sensors at once, there is a further optimization: combine multiple sensor readings into a single item, so that far fewer writes are needed to update all 2000 of them.
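A sketch of that packing, as a variant of the time-partitioned table above (the table name `SensorByTimePacked` and the chunk size of 200 readings per item are arbitrary assumptions; if this is DynamoDB, whatever size you pick has to keep each item under the 400 KB item limit):

```python
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
packed = dynamodb.Table("SensorByTimePacked")  # PK: reading_time, SK: chunk  (hypothetical)

CHUNK_SIZE = 200  # readings packed into one item; tune to stay well under the item size limit

def write_cycle_packed(reading_time: int, readings: dict[str, float]) -> None:
    sensor_ids = sorted(readings)
    with packed.batch_writer() as batch:
        for chunk_no, start in enumerate(range(0, len(sensor_ids), CHUNK_SIZE)):
            chunk = sensor_ids[start:start + CHUNK_SIZE]
            batch.put_item(Item={
                "reading_time": reading_time,
                "chunk": chunk_no,
                # one map attribute holding sensor_id -> value for the whole chunk
                "values": {s: Decimal(str(readings[s])) for s in chunk},
            })
```

With 2000 sensors and 200 readings per item, that's 10 writes per cycle instead of 2000, and the whole cycle is still readable with a single query on `reading_time`.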