
I have the following environment:

  1. Liferay DXP 7.1 with a clustered license (bundled with WildFly 16.0)
  2. Two nodes of this bundle running on the same machine (localhost) on different ports
  3. Both nodes use the same database (PostgreSQL)
  4. A separate Elasticsearch instance running on the same machine
  5. A shared folder for documents and media
  6. In portal-setup-wizard.properties I added "cluster.link.enabled=true" and "ehcache.replicator.properties.com.liferay.portal.kernel.webserver.WebServerServletToken=replicatePuts=true" in order to replicate the cache
  7. Deployed a sample portlet in order to test cache replication
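For reference, here is how the two entries from item 6 might look inside portal-setup-wizard.properties (assuming the file sits in the Liferay home of each node; the property values are taken verbatim from the setup above):

```properties
# Enable ClusterLink so the nodes see each other
cluster.link.enabled=true

# Attempt to switch this specific cache from invalidation to replication
ehcache.replicator.properties.com.liferay.portal.kernel.webserver.WebServerServletToken=replicatePuts=true
```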

I tried the following steps to test cache replication:

  1. Start the first node.
  2. Then start the second node.
  3. Open localhost for the first node.
  4. Open localhost for the second node.
  5. See the following message in the logs:

[Incoming-2,liferay-channel-control,WS-5459-64327][JGroupsReceiver:91] Accepted view MergeView::[WS-5459-18884|3] (2) [WS-5459-18884, WS-5459-64327], 2 subgroups: [WS-5459-64327|1] (2) [WS-5459-64327, WS-5459-18884], [WS-5459-18884|2] (1) [WS-5459-18884]

  6. In the test portlet on the first node, through debug mode, I put an entry into the cache via MultiVMPoolUtil:

MultiVMPoolUtil.getPortalCache("com.liferay.portal.kernel.webserver.WebServerServletToken").put("1","1")

  7. And on the second node I tried to read the keys of this cache:

MultiVMPoolUtil.getPortalCache("com.liferay.portal.kernel.webserver.WebServerServletToken").getKeys()

But there is no key "1" in this cache on the second node. However, if I remove the value on the first node using the same API (.remove("1")), the removal does propagate.

The question is: how do I configure cache replication for put operations?

atford1

1 Answer


ClusterLink doesn't work this way: it "invalidates" caches rather than "replicating" them.

If you modify object "1" on node1, node2 gets a notification that "1" was changed and, if it has this object cached, simply drops it from its own cache. Only when "1" is subsequently requested will the cache miss be detected and the object retrieved from the database (or other persistent storage).

If nobody on node2 ever asks for the object, nothing is retrieved on node2.

Further, if the cache on node1 overflows (the cache is not unlimited; in fact, it might be configured to size 0), you can't even assume you'll be able to retrieve this object from node1's cache forever.

So your observation is correct: changes or removals on one node remove the object with the given key from all caches. That's how it's implemented, and it's quite useful: there's no need to cache something that might never be accessed on a given machine.

I believe I've heard of actual cache replication a long time ago, so it might be configurable. But I've never attempted it, as it's simply not required, and invalidation doesn't impose a huge burden.
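The invalidate-then-reload behavior described above can be sketched in plain Java. This is a self-contained illustration, not Liferay code: all class and method names below are made up, and the maps stand in for the real caches and the shared PostgreSQL database.

```java
import java.util.HashMap;
import java.util.Map;

public class InvalidationSketch {

    // Stands in for the shared database both nodes point at.
    static final Map<String, String> database = new HashMap<>();

    // One local cache per node; the peer is notified on every change.
    static class NodeCache {
        final Map<String, String> local = new HashMap<>();
        NodeCache peer;

        void put(String key, String value) {
            database.put(key, value);   // persist to the shared store
            local.put(key, value);      // cache locally
            if (peer != null) {
                peer.invalidate(key);   // ClusterLink-style notification
            }
        }

        void invalidate(String key) {
            local.remove(key);          // drop the entry; nothing is replicated
        }

        // A cache miss falls through to the database and repopulates the cache.
        String get(String key) {
            return local.computeIfAbsent(key, database::get);
        }

        boolean isCached(String key) {
            return local.containsKey(key);
        }
    }

    public static void main(String[] args) {
        NodeCache node1 = new NodeCache();
        NodeCache node2 = new NodeCache();
        node1.peer = node2;
        node2.peer = node1;

        node1.put("1", "1");
        // The value was NOT pushed to node2 ...
        System.out.println(node2.isCached("1"));   // false
        // ... but a read on node2 repopulates it from the database.
        System.out.println(node2.get("1"));        // 1
        System.out.println(node2.isCached("1"));   // true
    }
}
```

This matches what the question observed: after a put on node1, node2's cache has no such key, yet the data is never lost because any read on node2 reloads it from the shared database.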

Olaf Kock