The `KEEP_LAST_HISTORY` QoS setting for a DataReader limits the number of recently received samples kept by the DataReader on a per-instance basis. As documented, for example, by RTI:

> For a DataReader: Connext DDS attempts to keep the most recent `depth` DDS samples received for each instance (identified by a unique key) until the application takes them via the DataReader's `take()` operation.
Besides valid data samples, a DDS DataReader will also receive invalid samples, e.g. to indicate a change in liveliness or the disposal of an instance. My questions concern how the history QoS settings affect these samples:
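To make the distinction concrete, here is roughly how valid and invalid samples surface through the modern C++ API (a sketch, not a complete program: `reader` is assumed to be an existing `dds::sub::DataReader<Foo>` for some hypothetical type `Foo`, and `process`/`handle_dispose` are placeholder functions):

```cpp
#include <dds/dds.hpp>  // DDS-PSM-Cxx umbrella header

void take_and_dispatch(dds::sub::DataReader<Foo>& reader) {
    // take() returns a loaned collection of samples, valid and invalid alike.
    dds::sub::LoanedSamples<Foo> samples = reader.take();
    for (const auto& sample : samples) {
        if (sample.info().valid()) {
            // A regular data sample; sample.data() is safe to access.
            process(sample.data());
        } else {
            // An invalid sample: no user data, only state information,
            // e.g. a dispose notification for the instance.
            if (sample.info().state().instance_state() ==
                    dds::sub::status::InstanceState::not_alive_disposed()) {
                handle_dispose(sample.info().instance_handle());
            }
        }
    }
}
```

The question, then, is whether both kinds of sample compete for the same per-instance history slots before `take()` is called.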
- Are invalid samples treated the same as valid samples when it comes to a `KEEP_LAST_HISTORY` setting? For example, say I use the default setting of keeping only the latest sample (history depth of 1), and a DataWriter sends a valid data sample and then immediately disposes the instance. Do I risk missing either of the samples, or will the invalid sample(s) be handled specially in any way (e.g. in a separate buffer)?
- In either case, can anyone point me to where the standard provides a definitive answer?
- Assuming the history depth setting affects all (valid and invalid) samples, what would be a good history depth setting on a keyed (and Reliable) topic, to make sure I miss neither the last datum nor the disposal event? Is this then even possible in general without resorting to `KEEP_ALL_HISTORY`?
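For concreteness, the naive mitigation I have in mind would be to raise the depth to 2, hoping that one slot holds the last datum and one the dispose notification. A sketch of that QoS configuration in the modern C++ API (the depth of 2 is my assumption, not something the standard prescribes):

```cpp
#include <dds/dds.hpp>  // DDS-PSM-Cxx umbrella header

// Reader QoS: Reliable, KeepLast with depth 2 instead of the default depth 1,
// in the hope of retaining both the final data sample and the dispose sample.
dds::sub::qos::DataReaderQos reader_qos;
reader_qos << dds::core::policy::Reliability::Reliable()
           << dds::core::policy::History::KeepLast(2);
```

Whether this actually guarantees anything depends on the answer to the first question, which is what I am trying to pin down.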
Just in case there are any (unexpected) implementation-specific differences, note that I am using RTI Connext 5.2.0 via the modern C++ API.