I'm about to embark on a blockchain project for supply chain tracking, and am currently investigating Hyperledger Sawtooth (https://www.youtube.com/watch?v=uBebFQM49Xk) and Hyperledger Fabric (https://drive.google.com/file/d/1OsIoPtlv5X2PWyOAlDn1FCnHCZPyrF57/view). Both frameworks appear to be capable of "thousands" of transactions per second (tps).
My question concerns my planned use case. If I need to track metrics for a certain type of supply item, each item must be updated every minute, and there are hundreds of thousands of these items at any point in time, how can this scale? I assume that as transactions queue up, latency increases. At 150,000 events reported per second (which seems conservative based on our calculations), we would incur delays of tens to hundreds of seconds.
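To make the concern concrete, here's a rough sketch of the arithmetic, assuming a sustained arrival rate of 150,000 events/second and an illustrative chain throughput of 3,000 tps (the specific throughput figure is an assumption, not a measured number for either framework):

```python
# Back-of-envelope queueing estimate: when events arrive faster than
# the chain can commit them, the backlog (and hence latency) grows
# for as long as the load is sustained.

ARRIVAL_RATE = 150_000  # events/second (our estimate)
THROUGHPUT = 3_000      # tps (assumed; "thousands" per the frameworks' claims)

def backlog_after(seconds):
    """Number of uncommitted events after `seconds` of sustained load."""
    return max(0, (ARRIVAL_RATE - THROUGHPUT) * seconds)

def latency_of_last_event(seconds):
    """Seconds until the event arriving at t=`seconds` is committed."""
    return backlog_after(seconds) / THROUGHPUT

print(backlog_after(60))          # 8,820,000 events queued after one minute
print(latency_of_last_event(60))  # 2940.0 seconds of delay
```

If these numbers are even roughly right, the delay isn't a fixed tens-to-hundreds of seconds; it keeps growing as long as the arrival rate exceeds throughput, which is what worries me.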
Is my simple math sound? Are there any mechanisms to work around this?
Thanks.