
I have a 2-node distributed cache setup which needs persistence configured for both members.

I have MapStore and MapLoader implemented, and the same code is deployed on both nodes.

The MapStore and MapLoader work absolutely fine in a single-member setup, but after another member joins, the MapStore and MapLoader continue to work only on the first member, and all inserts or updates by the second member are persisted to disk via the first member.

My requirement is that each member should be able to persist to disk independently, so that the distributed cache is backed up on all members and not just the first member.

Is there a setting I can change to achieve this?

Here is my Hazelcast Spring configuration.

    @Bean
    public HazelcastInstance hazelcastInstance(H2MapStorage h2mapStore) throws IOException {
        MapStoreConfig mapStoreConfig = new MapStoreConfig();
        mapStoreConfig.setImplementation(h2mapStore);
        mapStoreConfig.setWriteDelaySeconds(0);

        YamlConfigBuilder configBuilder = null;
        if (new File(hazelcastConfiglocation).exists()) {
            configBuilder = new YamlConfigBuilder(hazelcastConfiglocation);
        } else {
            configBuilder = new YamlConfigBuilder();
        }

        Config config = configBuilder.build();
        config.setProperty("hazelcast.jmx", "true");
        MapConfig mapConfig = config.getMapConfig("requests");
        mapConfig.setMapStoreConfig(mapStoreConfig);

        return Hazelcast.newHazelcastInstance(config);
    }

Here is my Hazelcast YAML config. It is placed at /opt/hazlecast.yml, which is picked up by my Spring config above.

hazelcast:
    group:
      name: tsystems
    management-center:
      enabled: false
      url: http://localhost:8080/hazelcast-mancenter
    network:
      port:
        auto-increment: true
        port-count: 100
        port: 5701
      outbound-ports:
        - 0
      join:
        multicast:
          enabled: false
          multicast-group: 224.2.2.3
          multicast-port: 54327
        tcp-ip:
          enabled: true
          member-list:
            - 192.168.1.13

Entire code is available here : [https://bitbucket.org/samrat_roy/hazelcasttest/src/master/][1]

Samrat
  • Did you confirm that your cluster is formed? Members [2] { Member [127.0.0.1]:5701 - c1ccc8d4-a549-4bff-bf46-9213e14a9fd2 this Member [127.0.0.1]:5702 - 33a82dbf-85d6-4780-b9cf-e47d42fb89d4 } – Mesut Apr 24 '20 at 16:55
  • yes , both members formed. – Samrat Apr 24 '20 at 17:07

2 Answers


This might just be bad luck and low data volumes, rather than an actual error.

On each node, try running the localKeySet() method and printing the results.

This will tell you which keys are on which node in the cluster. The node that owns key "X" will invoke the map store for that key, even if the update was initiated by another node.

If you have low data volumes, it may not be a 50/50 data split. At an extreme, 2 data records in a 2-node cluster could both be on the same node. If you have 1,000 data records, it's pretty unlikely that they'll all be on the same node.

So the other thing to try is add more data and update all data, to see if both nodes participate.
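For example, a minimal sketch of checking the ownership split (assuming the map is named `requests` as in the question; run the same snippet on each member of the cluster):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class OwnershipCheck {
    public static void main(String[] args) {
        // Start an embedded member; on a real node you would use your normal config.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("requests");
        for (int i = 0; i < 10; i++) {
            map.put("key-" + i, "value-" + i);
        }
        // localKeySet() returns only the keys whose partitions this member owns.
        // The MapStore for a key is invoked on the owning member, not on the
        // member that initiated the update.
        System.out.println("Keys owned locally: " + map.localKeySet());
        hz.shutdown();
    }
}
```

With two members running, the union of the two printed sets should cover all keys, showing which member's MapStore fires for which key.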

Neil Stevenson
  • Neil, I thought this was not supported by Hazelcast. They have mentioned it in the documentation. Check my own answer. I will try out your solution though. – Samrat Apr 24 '20 at 17:05
  • I don't see that mention, if it's still in the latest docs can you send me a link please. More important though is it's only about proving MapStore behaviour, which isn't going to be the solution to your problem. – Neil Stevenson Apr 25 '20 at 15:11

Ok, after struggling a lot, I noticed a teeny tiny but critical detail.

Datastore needs to be a centralized system that is accessible from all Hazelcast members. Persistence to a local file system is not supported.

This is absolutely in line with what I was observing: https://docs.hazelcast.org/docs/latest/manual/html-single/#loading-and-storing-persistent-data

However, do not be discouraged: I found out that I could use event listeners to do the same thing I needed to do.

    @Component
    public class HazelCastEntryListner
            implements EntryAddedListener<String, Object>, EntryUpdatedListener<String, Object>,
            EntryRemovedListener<String, Object>, EntryEvictedListener<String, Object>,
            EntryLoadedListener<String, Object>, MapEvictedListener, MapClearedListener {

        @Autowired
        @Lazy
        private RequestDao requestDao;

I created this class and hooked it into the config like so:

    MapConfig mapConfig = config.getMapConfig("requests");
    mapConfig.addEntryListenerConfig(new EntryListenerConfig(entryListner, false, true));
    return Hazelcast.newHazelcastInstance(config);

This worked flawlessly; I am able to replicate data to the embedded databases on each node.
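For anyone following along, the listener callbacks can simply mirror map events into the local store. Here is a self-contained sketch of the idea (a `ConcurrentHashMap` stands in for the embedded database and the `RequestDao`, whose methods are not shown in the original code):

```java
import com.hazelcast.core.EntryEvent;
import com.hazelcast.map.listener.EntryAddedListener;
import com.hazelcast.map.listener.EntryRemovedListener;
import com.hazelcast.map.listener.EntryUpdatedListener;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Every member registers its own listener instance, so each member
// receives every event and writes to its own local store.
public class PersistingListener
        implements EntryAddedListener<String, Object>,
                   EntryUpdatedListener<String, Object>,
                   EntryRemovedListener<String, Object> {

    // Stand-in for the embedded database / DAO.
    private final Map<String, Object> localStore = new ConcurrentHashMap<>();

    @Override
    public void entryAdded(EntryEvent<String, Object> event) {
        localStore.put(event.getKey(), event.getValue());
    }

    @Override
    public void entryUpdated(EntryEvent<String, Object> event) {
        localStore.put(event.getKey(), event.getValue());
    }

    @Override
    public void entryRemoved(EntryEvent<String, Object> event) {
        // The value is already gone; remove by key.
        localStore.remove(event.getKey());
    }

    public Map<String, Object> store() {
        return localStore;
    }
}
```

Register it with `map.addEntryListener(listener, true)` (or via `EntryListenerConfig`, as in the answer); the `true` flag makes events carry values, which the callbacks need in order to persist them.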

My use case was to cover HA failover edge cases. During HA failover, the slave node needed to know the working memory of the active node.

I am not using Hazelcast as a cache; rather, I am using it as a data-syncing mechanism.

Samrat
    Hazelcast default config is for 1 backup. In a 2-node cluster the data that node-1 owns has a backup in node-2, and vice versa. Try `kill -9` on node-1, and you will see the backup promoted to primary, and you can continue to access all data from node-2 without data loss or interruption. This may cover your case "_The slave node needed to know the working memory of the active node_". – Neil Stevenson Apr 25 '20 at 15:18
  • Alternatively, if you need all nodes to know all updates then your listener solution looks ok. Be aware that delivery is asynchronous. If key-1 on node-1 is updated and then almost immediately after key-2 on node-2 is updated, both nodes will get both events, but not necessarily in the same order. The event arriving from the other node has to pass across the network, so there is a partial ordering guarantee on the delivery sequence for events. Hopefully this doesn't matter, but it's something to be aware of. Equally, node-1's host machine could crash while node-1 is trying to send events. – Neil Stevenson Apr 25 '20 at 15:29
  • Neil, you mentioned that the Hazelcast default config is for 1 backup; is there a configuration option to change it? – Samrat May 02 '20 at 06:50
  • It's an easy config change but not without implications. An extra copy needs an extra node, so it brings a hardware cost. Write speed reduces as there are now more copies to keep aligned, so it brings a performance cost. – Neil Stevenson May 03 '20 at 09:03
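For reference, the setting Neil mentions is `backup-count`, which can be raised per map in the same YAML file used in the question. A sketch (the default is 1 synchronous backup; `async-backup-count` is shown only as an alternative):

```yaml
hazelcast:
  map:
    requests:
      backup-count: 2        # synchronous copies kept in addition to the primary
      async-backup-count: 0  # asynchronous backups, if preferred over sync ones
```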