
Environment

  1. Infinispan 9.4.18 embedded
  2. 3-node cluster, cache in replicated mode
  3. RocksDB store (or any other store; the store type doesn't matter). A configuration sketch for this setup follows below.
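
A minimal programmatic configuration matching this environment might look like the following. This is a sketch, not the asker's actual code: the NodeConfig class, the "repl-cluster" name, the "entities" cache name, and the /tmp store paths are illustrative, while default-configs/default-jgroups-tcp.xml is the TCP stack bundled with Infinispan.

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.persistence.rocksdb.configuration.RocksDBStoreConfigurationBuilder;

    public class NodeConfig {

        // Starts one embedded node with a replicated cache backed by a RocksDB store.
        public static DefaultCacheManager startNode(String nodeName) {
            GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
            global.transport()
                  .clusterName("repl-cluster")
                  .nodeName(nodeName)
                  // use the TCP stack shipped with Infinispan instead of the default UDP one
                  .addProperty("configurationFile", "default-configs/default-jgroups-tcp.xml");

            ConfigurationBuilder cache = new ConfigurationBuilder();
            cache.clustering().cacheMode(CacheMode.REPL_SYNC)
                 .persistence()
                 .addStore(RocksDBStoreConfigurationBuilder.class)
                 .location("/tmp/" + nodeName + "/rocksdb/data")
                 .expiredLocation("/tmp/" + nodeName + "/rocksdb/expired");

            DefaultCacheManager manager = new DefaultCacheManager(global.build());
            manager.defineConfiguration("entities", cache.build());
            return manager;
        }
    }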

Steps to reproduce

  1. Create TCP-based cluster
  2. Create cache
  3. Add Entity to cache
  4. Check that entity is stored on each node
  5. Stop non-coordinator node
  6. Remove Entity from cache on coordinator
  7. Check that Entity is removed on running nodes
  8. Start previously stopped non-coordinator node
  9. Check that the Entity is present on the restarted node but absent on the remaining nodes. The expected behavior is that the removal is replicated to the restarted node as well. (A sketch of these steps follows below.)
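
A single-JVM sketch that reproduces the behavior described above, assuming the hypothetical NodeConfig helper from the environment sketch. Real deployments run one cache manager per JVM, but three managers in one process join the same cluster and show the same effect:

    import org.infinispan.Cache;
    import org.infinispan.manager.DefaultCacheManager;

    public class ReproduceStaleEntry {
        public static void main(String[] args) {
            // Steps 1-2: three cache managers stand in for the three nodes
            DefaultCacheManager node1 = NodeConfig.startNode("node1"); // coordinator (started first)
            DefaultCacheManager node2 = NodeConfig.startNode("node2");
            DefaultCacheManager node3 = NodeConfig.startNode("node3");

            Cache<String, String> cache1 = node1.getCache("entities");
            Cache<String, String> cache2 = node2.getCache("entities");
            Cache<String, String> cache3 = node3.getCache("entities");

            // Steps 3-4: add an entry and verify it is replicated everywhere
            cache1.put("entity-1", "value");
            System.out.println(cache2.containsKey("entity-1")); // true
            System.out.println(cache3.containsKey("entity-1")); // true

            // Step 5: stop a non-coordinator node; its RocksDB store keeps the entry on disk
            node3.stop();

            // Steps 6-7: remove on the coordinator; the remaining node sees the removal
            cache1.remove("entity-1");
            System.out.println(cache2.containsKey("entity-1")); // false

            // Steps 8-9: restart the stopped node; it reloads the stale entry from its store
            DefaultCacheManager node3again = NodeConfig.startNode("node3");
            Cache<String, String> cache3again = node3again.getCache("entities");
            System.out.println(cache3again.containsKey("entity-1")); // true  (stale entry is back)
            System.out.println(cache2.containsKey("entity-1"));      // false (still removed here)
        }
    }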

Questions

  1. Is this behavior OK?
  2. Can I change it so that removals are replicated to restarted nodes, as expected?
  3. If so, how?

g.orlov
1 Answer


Infinispan does not replicate removals to a restarted node. The workaround is to remove all the entries in the restarted node's stores before starting, by configuring the store with purge="true".
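
In the programmatic API, purge="true" corresponds to purgeOnStartup(true) on the store builder. A hedged sketch, assuming the RocksDB store from the question (the class and method names around it are illustrative):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.persistence.rocksdb.configuration.RocksDBStoreConfigurationBuilder;

    public class PurgingStoreConfig {
        public static Configuration replicatedWithPurgingStore(String dataDir, String expiredDir) {
            ConfigurationBuilder cache = new ConfigurationBuilder();
            cache.clustering().cacheMode(CacheMode.REPL_SYNC)
                 .persistence()
                 .addStore(RocksDBStoreConfigurationBuilder.class)
                 .location(dataDir)
                 .expiredLocation(expiredDir)
                 .purgeOnStartup(true); // programmatic equivalent of purge="true"
            return cache.build();
        }
    }

With purging enabled, a restarted node comes up empty and receives the current state from the running members, so the stale entry never reappears. The cost, as the comments below discuss, is that the stores cannot be used to recover data after a full cluster restart.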

Dan Berindei
  • Thanks for the answer, but it is not useful for me: a cluster with this setting can't survive a simultaneous restart of all nodes. – g.orlov Apr 24 '20 at 15:07
  • Yes, removing all the data on cluster restart is a big limitation. Unfortunately there's no other built-in option; you'll have to remove the store data from an external script before restarting the node. – Dan Berindei Apr 27 '20 at 10:35
  • Or implement another solution: do not delete removed records, but mark them as removed, and schedule a task that periodically purges such marked records on each node. Yes, it will impact range reads, but it will repair the consistency problem (sketched below). – g.orlov May 07 '20 at 06:34
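
A minimal sketch of that tombstone idea in the same embedded-Java setting. The Versioned wrapper, the TombstonePurger class, and the grace-period parameter are all hypothetical names introduced here for illustration:

    import java.io.Serializable;
    import java.util.Map;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.infinispan.Cache;

    // Hypothetical wrapper: a "removal" overwrites the entry with a tombstone instead of deleting it.
    class Versioned<V> implements Serializable {
        final V value;
        final boolean deleted;
        final long timestamp = System.currentTimeMillis();
        Versioned(V value, boolean deleted) { this.value = value; this.deleted = deleted; }
    }

    class TombstonePurger {
        // Periodically removes tombstones older than the grace period on this node.
        static <K, V> ScheduledExecutorService schedule(Cache<K, Versioned<V>> cache, long graceMillis) {
            ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
            executor.scheduleAtFixedRate(() -> {
                long cutoff = System.currentTimeMillis() - graceMillis;
                for (Map.Entry<K, Versioned<V>> e : cache.entrySet()) {
                    if (e.getValue().deleted && e.getValue().timestamp < cutoff) {
                        cache.remove(e.getKey()); // the real delete, replicated to all running nodes
                    }
                }
            }, graceMillis, graceMillis, TimeUnit.MILLISECONDS);
            return executor;
        }
    }

A removal then becomes cache.put(key, new Versioned<>(null, true)): unlike a plain remove, the tombstone is persisted in the stopped node's store, so the restarted node sees the entry as deleted. The grace period must exceed the longest expected node downtime, otherwise a purged tombstone recreates the original problem.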