
I'm consuming an incoming data stream of 20-25 packets per second per sensor, each carrying floating-point values (e.g. temperature), and I want to accumulate them and produce a 1-per-second outgoing data stream. I'm looking for a (ready-to-use) data structure that avoids fragmentation.

In other words, I guess hash tables and vectors are not suitable here, and an allocator-aware container should be used (along with a memory pool?).

fiorentinoing
    It sounds like you want a circular buffer? – AVH Mar 16 '20 at 11:05
  • Why do you suggest a circular buffer? – fiorentinoing Mar 16 '20 at 11:08
  • Because you could create a buffer of e.g. 50 elements and push all your packets in there. Every second, you'd generate an output. The elements that get overwritten would already be older than 1 second, so you wouldn't need them anymore. So you'd never need to allocate new storage. – AVH Mar 16 '20 at 11:10
  • Why would a vector be bad here? – Surt Mar 16 '20 at 11:10
  • @Darhuuk every second I want a view of the current status (i.e. an average value) to send to output. So nothing circular would help; it might be that the vector itself does the job. – fiorentinoing Mar 16 '20 at 11:24
    From the sound of it, you don't even need a vector then. Just sum all incoming packets and every second output that sum divided by the number of packets. Then reset your packet sum & counter. – AVH Mar 16 '20 at 11:25
  • @Surt I thought fragmentation issues would arise with std::vector, but I misunderstood the actual usage in which std::vector suffers from this issue. – fiorentinoing Mar 16 '20 at 11:29
  • @Darhuuk this cannot be the case, because I would use float and machine epsilon can enter the field for a great number of packets – fiorentinoing Mar 16 '20 at 11:31
    @fiorentinoing It seems like your question is missing a lot of necessary information for a proper answer then :). – AVH Mar 16 '20 at 11:38
    "I would use float and machine epsilon can enter the field for a great number of packets". What does this mean? – geza Mar 16 '20 at 12:54
  • @geza you normally need a running-average strategy to sum up numbers with a given precision. Floating-point epsilon is roughly 1e-7, so you have "only" 7 digits of precision. With thousands of messages to sum up, this can become an issue, but that's a completely different kettle of fish. – fiorentinoing Mar 16 '20 at 14:21
  • @Darhuuk my concerns were about fragmentation with std::vector. It turns out that in my specific case I populate the "buffer", sum it up, and clear it, so I never free/deallocate memory (just clear()), and no fragmentation can occur. – fiorentinoing Mar 16 '20 at 14:24
  • @fiorentinoing: then why do you simplify the problem to the extent that the trivial solution doesn't work anymore (i.e., just sum the numbers, and after a second, reset to zero, as Darhuuk suggested)? – geza Mar 16 '20 at 15:55

0 Answers