Having a Hazelcast cluster, how do I make sure that the changes I made to an IMap are completely propagated to the QueryCaches on all of the cluster nodes before I invoke an EntryProcessor that uses the changed data from those QueryCaches?
A simplified scenario of what I'm trying to achieve:
- I have an algorithm that revalues items based on some parameters
- There are up to one billion items, so they are kept in an IMap
- There are hundreds of thousands of parameters, also kept in an IMap
- Each node has a complete copy of all parameters in the form of a QueryCache, to speed things up (set up roughly as in the sketch after this list)
- A request comes in to the app to change a couple of parameters and revalue all items
- Parameters are changed by a simple map.put(), then the algorithm, in the form of an EntryProcessor, is invoked on the items on each node
- This won't work, as the updates to the QueryCache are asynchronous, so sometimes the algorithm will use old parameter values
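For reference, the per-node parameter cache is created along these lines; the cache name, value type, and predicate below are placeholders, and I'm assuming the Hazelcast 4.x API (IMap.getQueryCache with Predicates.alwaysTrue()):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.QueryCache;
import com.hazelcast.query.Predicates;

public class ParameterCacheSetup {
    public static void main(String[] args) {
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();

        // Distributed map holding all parameters (placeholder value type).
        IMap<String, Double> parametersMap = hazelcastInstance.getMap("parameters");

        // Local, continuously updated copy of every parameter on this node.
        // Updates are propagated asynchronously via events, which is the
        // source of the problem described above.
        QueryCache<String, Double> parameterCache =
                parametersMap.getQueryCache("allParameters", Predicates.alwaysTrue(), true);
    }
}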
public void handleRevaluationRequest(Object parametersChangeInstructions) {
    TransactionContext transaction = hazelcastInstance.newTransactionContext();
    transaction.beginTransaction();

    TransactionalMap parameters = transaction.getMap("parameters");
    parameters.set(...); // changes to a few parameters
    parameters.put(...); // adding a few different parameters
    transaction.commitTransaction();

    IMap items = hazelcastInstance.getMap("items");
    items.executeOnEntries(new RevaluationProcessor());
    // processor uses new and/or existing parameters from the QueryCache to revalue items
    // but won't always see the changes to parameters that we've just made
}
Is there a way to achieve something like that? Maybe instead of a QueryCache, using a different data structure would be more appropriate to achieve synchronous "replication" of the reference data that the EntryProcessor can use.
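To make that last idea more concrete, one direction would be to stop relying on the QueryCache for the values that just changed and instead ship a snapshot of them inside the EntryProcessor itself, so the revaluation can never see stale parameters for this request. The class below is only a rough sketch: the names are hypothetical, item and parameter values are modelled as plain doubles, and it assumes the Hazelcast 4.x EntryProcessor<K, V, R> interface.

import java.util.HashMap;
import java.util.Map;

import com.hazelcast.map.EntryProcessor;

// Hypothetical variant of RevaluationProcessor that carries the changed
// parameters with it, so it does not depend on QueryCache propagation timing.
public class SnapshotRevaluationProcessor implements EntryProcessor<String, Double, Void> {

    private final Map<String, Double> changedParameters;

    public SnapshotRevaluationProcessor(Map<String, Double> changedParameters) {
        // Defensive copy; the snapshot is serialized together with the
        // processor and shipped to every member that owns item partitions.
        this.changedParameters = new HashMap<>(changedParameters);
    }

    @Override
    public Void process(Map.Entry<String, Double> entry) {
        // Parameters that are part of this change are read from the snapshot;
        // unchanged parameters could still come from the local QueryCache.
        double factor = changedParameters.getOrDefault("factor", 1.0);
        entry.setValue(entry.getValue() * factor);
        return null;
    }
}

On the calling side this would replace new RevaluationProcessor() with new SnapshotRevaluationProcessor(changedParameterSnapshot). The trade-off is that the snapshot travels with the processor, so it only helps for the handful of parameters changed per request, not for the whole parameter map.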