
Having a Hazelcast cluster, how do I make sure that the changes I made to an IMap are completely propagated to QueryCaches on all of the cluster nodes, before I invoke an EntryProcessor that uses the changed data from those QueryCaches?

A simplified scenario of what I'm trying to achieve:

  • I have an algorithm that revalues items based on some parameters
  • There are up to one billion items so they are kept in an IMap
  • There are hundreds of thousands of parameters, also kept in an IMap
  • Each node has a complete copy of all parameters in the form of QueryCache to speed things up
  • A request to the app comes in, to change a couple of parameters and revalue all items
  • Parameters are changed by simple map.put(), then the algorithm in the form of EntryProcessor is invoked on the items on each node
  • This won't work as-is, because updates to a QueryCache are asynchronous, so the algorithm will sometimes use stale parameter values
public void handleRevaluationRequest(Object parametersChangeInstructions) {
    TransactionContext transaction = hazelcastInstance.newTransactionContext();
    transaction.beginTransaction();
    TransactionalMap<String, Parameter> parameters = transaction.getMap("parameters");
    parameters.set(...); // changes to a few parameters
    parameters.put(...); // adding a few different parameters
    transaction.commitTransaction();

    IMap<String, Item> items = hazelcastInstance.getMap("items");
    items.executeOnEntries(new RevaluationProcessor());
    // the processor uses new and/or existing parameters from the QueryCache to revalue items,
    // but won't always see the changes to the parameters that we've just made
}

Is there a way to achieve something like that? Perhaps instead of a QueryCache, a different data structure would be more appropriate for achieving synchronous "replication" of the reference data used by the EntryProcessor.

Wojciech Gdela
  • `EntryProcessor`s run on members, so they don't use the client-side value but the server-side one, so an EntryProcessor always sees the updated value – unless you set a value while creating it and send it over? – Gokhan Oner Feb 28 '19 at 21:37
  • @GokhanOner QueryCaches are asynchronous. There is no guarantee that the value will already have been propagated just after the transaction. Also, commit propagation takes some time – T. Gawęda Mar 01 '19 at 10:50
  • What I'm saying is that your QueryCache is on your local node, most probably a Hazelcast client. When an entry is updated, the QueryCache receives the event asynchronously, yes, but when you send an EntryProcessor, it doesn't run on your node, it runs on a member. If the data has already changed in the cluster, the EP sees the updated data. Can you share some example code so I can see what you're trying to do and what is not working? – Gokhan Oner Mar 01 '19 at 17:30
  • @GokhanOner See example code [here](https://gist.github.com/gdela/a56a30dc77eeebf86b457b8fd65ebb8f), it shows how even on a single node (without any clients) this delay in updating QueryCaches can be observed. – Wojciech Gdela Mar 05 '19 at 13:21

1 Answer


When you do a map.put and run an EntryProcessor afterwards, the EP runs against the key-value store on the server side, so it always works on the latest value on the server. The update to the QueryCache triggered by map.put is asynchronous and unrelated to what you do in the EntryProcessor and when.

Additionally, for your information, an EntryProcessor runs on the partition thread, which means the thread responsible for updating a value is also the one that runs the EntryProcessor. So while an EntryProcessor is running on an entry, no other thread can update that entry's value.

wildnez
  • All correct, but the question remains unanswered: can we somehow make sure that updates to an `IMap` have been propagated to all `QueryCaches`? Or make the `QueryCaches` synchronous? – Wojciech Gdela Mar 05 '19 at 13:27
  • Try setting delay-seconds to 0; that is the closest to synchronous you can get. QueryCache population is based on events, so it will always be asynchronous due to the nature of the event-listener paradigm. – wildnez Mar 07 '19 at 04:03
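To illustrate the suggestion in the last comment, a query cache can be configured with `delay-seconds` set to 0 and a batch size of 1 so that events are pushed to the cache immediately rather than buffered. This is a sketch only; the map and cache names and the predicate class are hypothetical, and even with these settings the propagation remains asynchronous, just with minimal delay:

```xml
<map name="parameters">
  <query-caches>
    <!-- hypothetical cache name; predicate class is an assumption -->
    <query-cache name="parametersCache">
      <predicate type="class-name">com.example.AllParametersPredicate</predicate>
      <!-- push events immediately instead of buffering them -->
      <delay-seconds>0</delay-seconds>
      <batch-size>1</batch-size>
      <include-value>true</include-value>
    </query-cache>
  </query-caches>
</map>
```

The equivalent programmatic configuration would use `QueryCacheConfig.setDelaySeconds(0)` and `setBatchSize(1)`. Note that this reduces, but does not eliminate, the window in which an EntryProcessor could observe stale QueryCache data.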